00:00:00.001 Started by upstream project "autotest-nightly" build number 4359 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3722 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.091 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.093 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.126 Fetching changes from the remote Git repository 00:00:00.128 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.180 Using shallow fetch with depth 1 00:00:00.180 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.180 > git --version # timeout=10 00:00:00.230 > git --version # 'git version 2.39.2' 00:00:00.230 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.266 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.266 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.594 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.604 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.615 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.615 > git config core.sparsecheckout # timeout=10 00:00:05.628 > git read-tree -mu HEAD # timeout=10 00:00:05.646 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.661 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.661 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.744 [Pipeline] Start of Pipeline 00:00:05.755 [Pipeline] library 00:00:05.756 Loading library shm_lib@master 00:00:05.756 Library shm_lib@master is cached. Copying from home. 00:00:05.768 [Pipeline] node 00:00:05.783 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.784 [Pipeline] { 00:00:05.792 [Pipeline] catchError 00:00:05.793 [Pipeline] { 00:00:05.800 [Pipeline] wrap 00:00:05.805 [Pipeline] { 00:00:05.810 [Pipeline] stage 00:00:05.812 [Pipeline] { (Prologue) 00:00:05.822 [Pipeline] echo 00:00:05.823 Node: VM-host-SM38 00:00:05.827 [Pipeline] cleanWs 00:00:05.837 [WS-CLEANUP] Deleting project workspace... 00:00:05.837 [WS-CLEANUP] Deferred wipeout is used... 
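For reference, the shallow single-ref checkout traced above can be reproduced by hand; the fetch URL, ref and revision are taken verbatim from the log, while the target directory is arbitrary and the credential/proxy handling (GIT_ASKPASS, proxy-dmz.intel.com:911) is omitted here.

# Minimal sketch of the shallow checkout performed by the Jenkins git plugin above
# (no credentials or proxy configured; directory name is illustrative):
git init jbp && cd jbp
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f db4637e8b949f278f369ec13f70585206ccd9507    # the FETCH_HEAD revision checked out above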
00:00:05.845 [WS-CLEANUP] done 00:00:06.125 [Pipeline] setCustomBuildProperty 00:00:06.197 [Pipeline] httpRequest 00:00:07.040 [Pipeline] echo 00:00:07.041 Sorcerer 10.211.164.20 is alive 00:00:07.051 [Pipeline] retry 00:00:07.053 [Pipeline] { 00:00:07.066 [Pipeline] httpRequest 00:00:07.071 HttpMethod: GET 00:00:07.072 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.072 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.074 Response Code: HTTP/1.1 200 OK 00:00:07.075 Success: Status code 200 is in the accepted range: 200,404 00:00:07.075 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.125 [Pipeline] } 00:00:08.140 [Pipeline] // retry 00:00:08.146 [Pipeline] sh 00:00:08.430 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.446 [Pipeline] httpRequest 00:00:09.054 [Pipeline] echo 00:00:09.055 Sorcerer 10.211.164.20 is alive 00:00:09.063 [Pipeline] retry 00:00:09.064 [Pipeline] { 00:00:09.077 [Pipeline] httpRequest 00:00:09.083 HttpMethod: GET 00:00:09.084 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:09.084 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:09.098 Response Code: HTTP/1.1 200 OK 00:00:09.099 Success: Status code 200 is in the accepted range: 200,404 00:00:09.100 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:54.263 [Pipeline] } 00:00:54.281 [Pipeline] // retry 00:00:54.289 [Pipeline] sh 00:00:54.580 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:57.137 [Pipeline] sh 00:00:57.423 + git -C spdk log --oneline -n5 00:00:57.423 e01cb43b8 mk/spdk.common.mk sed the minor version 00:00:57.423 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:00:57.423 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:00:57.423 66289a6db build: use VERSION file for storing version 00:00:57.423 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:00:57.443 [Pipeline] writeFile 00:00:57.457 [Pipeline] sh 00:00:57.743 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:57.757 [Pipeline] sh 00:00:58.043 + cat autorun-spdk.conf 00:00:58.043 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.043 SPDK_TEST_NVME=1 00:00:58.043 SPDK_TEST_FTL=1 00:00:58.043 SPDK_TEST_ISAL=1 00:00:58.043 SPDK_RUN_ASAN=1 00:00:58.043 SPDK_RUN_UBSAN=1 00:00:58.043 SPDK_TEST_XNVME=1 00:00:58.043 SPDK_TEST_NVME_FDP=1 00:00:58.043 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.051 RUN_NIGHTLY=1 00:00:58.053 [Pipeline] } 00:00:58.066 [Pipeline] // stage 00:00:58.080 [Pipeline] stage 00:00:58.082 [Pipeline] { (Run VM) 00:00:58.094 [Pipeline] sh 00:00:58.378 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:58.378 + echo 'Start stage prepare_nvme.sh' 00:00:58.378 Start stage prepare_nvme.sh 00:00:58.378 + [[ -n 6 ]] 00:00:58.378 + disk_prefix=ex6 00:00:58.378 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:58.378 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:58.378 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:58.378 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.378 ++ SPDK_TEST_NVME=1 00:00:58.378 ++ SPDK_TEST_FTL=1 00:00:58.378 ++ SPDK_TEST_ISAL=1 00:00:58.379 ++ 
SPDK_RUN_ASAN=1 00:00:58.379 ++ SPDK_RUN_UBSAN=1 00:00:58.379 ++ SPDK_TEST_XNVME=1 00:00:58.379 ++ SPDK_TEST_NVME_FDP=1 00:00:58.379 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.379 ++ RUN_NIGHTLY=1 00:00:58.379 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:58.379 + nvme_files=() 00:00:58.379 + declare -A nvme_files 00:00:58.379 + backend_dir=/var/lib/libvirt/images/backends 00:00:58.379 + nvme_files['nvme.img']=5G 00:00:58.379 + nvme_files['nvme-cmb.img']=5G 00:00:58.379 + nvme_files['nvme-multi0.img']=4G 00:00:58.379 + nvme_files['nvme-multi1.img']=4G 00:00:58.379 + nvme_files['nvme-multi2.img']=4G 00:00:58.379 + nvme_files['nvme-openstack.img']=8G 00:00:58.379 + nvme_files['nvme-zns.img']=5G 00:00:58.379 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:58.379 + (( SPDK_TEST_FTL == 1 )) 00:00:58.379 + nvme_files["nvme-ftl.img"]=6G 00:00:58.379 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:58.379 + nvme_files["nvme-fdp.img"]=1G 00:00:58.379 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:58.379 + for nvme in "${!nvme_files[@]}" 00:00:58.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:58.379 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.379 + for nvme in "${!nvme_files[@]}" 00:00:58.379 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:00:59.324 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:59.324 + for nvme in "${!nvme_files[@]}" 00:00:59.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:59.324 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:59.324 + for nvme in "${!nvme_files[@]}" 00:00:59.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:59.324 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:59.324 + for nvme in "${!nvme_files[@]}" 00:00:59.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:59.324 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:59.324 + for nvme in "${!nvme_files[@]}" 00:00:59.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:59.324 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:59.324 + for nvme in "${!nvme_files[@]}" 00:00:59.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:59.324 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:59.324 + for nvme in "${!nvme_files[@]}" 00:00:59.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:00:59.585 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:59.585 + for nvme in "${!nvme_files[@]}" 00:00:59.586 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:59.586 Formatting 
'/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:59.586 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:59.586 + echo 'End stage prepare_nvme.sh' 00:00:59.586 End stage prepare_nvme.sh 00:00:59.599 [Pipeline] sh 00:00:59.885 + DISTRO=fedora39 00:00:59.885 + CPUS=10 00:00:59.885 + RAM=12288 00:00:59.885 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:59.885 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:59.885 00:00:59.885 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:59.885 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:59.885 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:59.885 HELP=0 00:00:59.885 DRY_RUN=0 00:00:59.885 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:00:59.885 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:59.885 NVME_AUTO_CREATE=0 00:00:59.885 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:00:59.885 NVME_CMB=,,,, 00:00:59.885 NVME_PMR=,,,, 00:00:59.885 NVME_ZNS=,,,, 00:00:59.885 NVME_MS=true,,,, 00:00:59.885 NVME_FDP=,,,on, 00:00:59.885 SPDK_VAGRANT_DISTRO=fedora39 00:00:59.885 SPDK_VAGRANT_VMCPU=10 00:00:59.885 SPDK_VAGRANT_VMRAM=12288 00:00:59.885 SPDK_VAGRANT_PROVIDER=libvirt 00:00:59.885 SPDK_VAGRANT_HTTP_PROXY= 00:00:59.885 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:59.885 SPDK_OPENSTACK_NETWORK=0 00:00:59.885 VAGRANT_PACKAGE_BOX=0 00:00:59.885 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:59.885 FORCE_DISTRO=true 00:00:59.885 VAGRANT_BOX_VERSION= 00:00:59.885 EXTRA_VAGRANTFILES= 00:00:59.885 NIC_MODEL=e1000 00:00:59.885 00:00:59.885 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:59.885 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:02.438 Bringing machine 'default' up with 'libvirt' provider... 00:01:02.700 ==> default: Creating image (snapshot of base box volume). 00:01:02.961 ==> default: Creating domain with the following settings... 
00:01:02.961 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734129701_9eedd8917a539bb11378 00:01:02.961 ==> default: -- Domain type: kvm 00:01:02.961 ==> default: -- Cpus: 10 00:01:02.961 ==> default: -- Feature: acpi 00:01:02.961 ==> default: -- Feature: apic 00:01:02.961 ==> default: -- Feature: pae 00:01:02.961 ==> default: -- Memory: 12288M 00:01:02.961 ==> default: -- Memory Backing: hugepages: 00:01:02.961 ==> default: -- Management MAC: 00:01:02.961 ==> default: -- Loader: 00:01:02.961 ==> default: -- Nvram: 00:01:02.961 ==> default: -- Base box: spdk/fedora39 00:01:02.961 ==> default: -- Storage pool: default 00:01:02.961 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734129701_9eedd8917a539bb11378.img (20G) 00:01:02.961 ==> default: -- Volume Cache: default 00:01:02.961 ==> default: -- Kernel: 00:01:02.961 ==> default: -- Initrd: 00:01:02.961 ==> default: -- Graphics Type: vnc 00:01:02.961 ==> default: -- Graphics Port: -1 00:01:02.961 ==> default: -- Graphics IP: 127.0.0.1 00:01:02.961 ==> default: -- Graphics Password: Not defined 00:01:02.961 ==> default: -- Video Type: cirrus 00:01:02.961 ==> default: -- Video VRAM: 9216 00:01:02.961 ==> default: -- Sound Type: 00:01:02.961 ==> default: -- Keymap: en-us 00:01:02.961 ==> default: -- TPM Path: 00:01:02.961 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:02.961 ==> default: -- Command line args: 00:01:02.961 ==> default: -> value=-device, 00:01:02.961 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:02.961 ==> default: -> value=-drive, 00:01:02.961 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:02.961 ==> default: -> value=-device, 00:01:02.961 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:02.961 ==> default: -> value=-device, 00:01:02.961 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:02.961 ==> default: -> value=-drive, 00:01:02.961 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:02.961 ==> default: -> value=-device, 00:01:02.961 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.961 ==> default: -> value=-device, 00:01:02.961 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:02.961 ==> default: -> value=-drive, 00:01:02.961 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:02.962 ==> default: -> value=-device, 00:01:02.962 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.962 ==> default: -> value=-drive, 00:01:02.962 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:02.962 ==> default: -> value=-device, 00:01:02.962 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.962 ==> default: -> value=-drive, 00:01:02.962 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:02.962 ==> default: -> value=-device, 00:01:02.962 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.962 ==> default: -> value=-device, 00:01:02.962 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:02.962 ==> default: -> value=-device, 00:01:02.962 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:02.962 ==> default: -> value=-drive, 00:01:02.962 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:02.962 ==> default: -> value=-device, 00:01:02.962 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.225 ==> default: Creating shared folders metadata... 00:01:03.225 ==> default: Starting domain. 00:01:05.142 ==> default: Waiting for domain to get an IP address... 00:01:23.266 ==> default: Waiting for SSH to become available... 00:01:23.266 ==> default: Configuring and enabling network interfaces... 00:01:26.565 default: SSH address: 192.168.121.227:22 00:01:26.565 default: SSH username: vagrant 00:01:26.565 default: SSH auth method: private key 00:01:28.479 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:36.642 ==> default: Mounting SSHFS shared folder... 00:01:38.029 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:38.029 ==> default: Checking Mount.. 00:01:39.419 ==> default: Folder Successfully Mounted! 00:01:39.419 00:01:39.419 SUCCESS! 00:01:39.419 00:01:39.419 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:39.419 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:39.419 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:39.419 00:01:39.430 [Pipeline] } 00:01:39.445 [Pipeline] // stage 00:01:39.453 [Pipeline] dir 00:01:39.454 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:39.455 [Pipeline] { 00:01:39.467 [Pipeline] catchError 00:01:39.468 [Pipeline] { 00:01:39.479 [Pipeline] sh 00:01:39.762 + vagrant ssh-config --host vagrant 00:01:39.763 + sed -ne '/^Host/,$p' 00:01:39.763 + tee ssh_conf 00:01:42.311 Host vagrant 00:01:42.311 HostName 192.168.121.227 00:01:42.312 User vagrant 00:01:42.312 Port 22 00:01:42.312 UserKnownHostsFile /dev/null 00:01:42.312 StrictHostKeyChecking no 00:01:42.312 PasswordAuthentication no 00:01:42.312 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:42.312 IdentitiesOnly yes 00:01:42.312 LogLevel FATAL 00:01:42.312 ForwardAgent yes 00:01:42.312 ForwardX11 yes 00:01:42.312 00:01:42.327 [Pipeline] withEnv 00:01:42.330 [Pipeline] { 00:01:42.344 [Pipeline] sh 00:01:42.630 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:42.630 source /etc/os-release 00:01:42.630 [[ -e /image.version ]] && img=$(< /image.version) 00:01:42.630 # Minimal, systemd-like check. 
00:01:42.630 if [[ -e /.dockerenv ]]; then 00:01:42.630 # Clear garbage from the node'\''s name: 00:01:42.630 # agt-er_autotest_547-896 -> autotest_547-896 00:01:42.630 # $HOSTNAME is the actual container id 00:01:42.630 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:42.630 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:42.630 # We can assume this is a mount from a host where container is running, 00:01:42.630 # so fetch its hostname to easily identify the target swarm worker. 00:01:42.630 container="$(< /etc/hostname) ($agent)" 00:01:42.630 else 00:01:42.630 # Fallback 00:01:42.630 container=$agent 00:01:42.630 fi 00:01:42.630 fi 00:01:42.630 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:42.630 ' 00:01:42.905 [Pipeline] } 00:01:42.920 [Pipeline] // withEnv 00:01:42.928 [Pipeline] setCustomBuildProperty 00:01:42.942 [Pipeline] stage 00:01:42.944 [Pipeline] { (Tests) 00:01:42.960 [Pipeline] sh 00:01:43.245 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:43.522 [Pipeline] sh 00:01:43.807 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:44.085 [Pipeline] timeout 00:01:44.086 Timeout set to expire in 50 min 00:01:44.088 [Pipeline] { 00:01:44.101 [Pipeline] sh 00:01:44.386 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:44.961 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:44.975 [Pipeline] sh 00:01:45.259 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:45.534 [Pipeline] sh 00:01:45.816 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:46.104 [Pipeline] sh 00:01:46.383 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:46.645 ++ readlink -f spdk_repo 00:01:46.645 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:46.645 + [[ -n /home/vagrant/spdk_repo ]] 00:01:46.645 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:46.645 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:46.645 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:46.645 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:46.645 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:46.645 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:46.645 + cd /home/vagrant/spdk_repo 00:01:46.645 + source /etc/os-release 00:01:46.645 ++ NAME='Fedora Linux' 00:01:46.645 ++ VERSION='39 (Cloud Edition)' 00:01:46.645 ++ ID=fedora 00:01:46.645 ++ VERSION_ID=39 00:01:46.645 ++ VERSION_CODENAME= 00:01:46.645 ++ PLATFORM_ID=platform:f39 00:01:46.645 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:46.645 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:46.645 ++ LOGO=fedora-logo-icon 00:01:46.645 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:46.645 ++ HOME_URL=https://fedoraproject.org/ 00:01:46.645 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:46.645 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:46.645 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:46.645 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:46.645 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:46.645 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:46.645 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:46.645 ++ SUPPORT_END=2024-11-12 00:01:46.645 ++ VARIANT='Cloud Edition' 00:01:46.645 ++ VARIANT_ID=cloud 00:01:46.645 + uname -a 00:01:46.645 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:46.645 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:46.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:47.168 Hugepages 00:01:47.168 node hugesize free / total 00:01:47.168 node0 1048576kB 0 / 0 00:01:47.168 node0 2048kB 0 / 0 00:01:47.168 00:01:47.168 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:47.168 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:47.168 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:47.168 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:47.429 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:47.429 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:47.429 + rm -f /tmp/spdk-ld-path 00:01:47.429 + source autorun-spdk.conf 00:01:47.429 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.429 ++ SPDK_TEST_NVME=1 00:01:47.429 ++ SPDK_TEST_FTL=1 00:01:47.429 ++ SPDK_TEST_ISAL=1 00:01:47.429 ++ SPDK_RUN_ASAN=1 00:01:47.429 ++ SPDK_RUN_UBSAN=1 00:01:47.429 ++ SPDK_TEST_XNVME=1 00:01:47.429 ++ SPDK_TEST_NVME_FDP=1 00:01:47.429 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.429 ++ RUN_NIGHTLY=1 00:01:47.429 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:47.429 + [[ -n '' ]] 00:01:47.429 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:47.429 + for M in /var/spdk/build-*-manifest.txt 00:01:47.429 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:47.429 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.429 + for M in /var/spdk/build-*-manifest.txt 00:01:47.429 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:47.429 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.429 + for M in /var/spdk/build-*-manifest.txt 00:01:47.429 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:47.429 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.429 ++ uname 00:01:47.429 + [[ Linux == \L\i\n\u\x ]] 00:01:47.429 + sudo dmesg -T 00:01:47.429 + sudo dmesg --clear 00:01:47.429 + dmesg_pid=5023 00:01:47.429 
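The nvme3 controller reported by setup.sh status above (0000:00:13.0, nvme3n1) is the FDP-enabled one. Regrouping the relevant arguments from the QEMU command line assembled earlier shows how that controller is wired up; the values are copied from the log, only the grouping and line breaks are added, and the guest-side cross-check with nvme-cli assumes the tool is installed in the image.

# FDP subsystem + controller + namespace backed by the 1G ex6-nvme-fdp.img, as passed to QEMU above:
-device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8
-device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3
-drive  format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0
-device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
# Optional guest-side check (assumes nvme-cli is available): the controller with serial 12343 is nvme3.
sudo nvme list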
+ [[ Fedora Linux == FreeBSD ]] 00:01:47.429 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.429 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.429 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:47.429 + sudo dmesg -Tw 00:01:47.429 + [[ -x /usr/src/fio-static/fio ]] 00:01:47.429 + export FIO_BIN=/usr/src/fio-static/fio 00:01:47.429 + FIO_BIN=/usr/src/fio-static/fio 00:01:47.429 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:47.429 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:47.429 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:47.429 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.429 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.429 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:47.429 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.429 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.429 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.429 22:42:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:47.429 22:42:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.429 22:42:26 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:01:47.429 22:42:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:47.429 22:42:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.690 22:42:26 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:47.690 22:42:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:47.690 22:42:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:47.690 22:42:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:47.690 22:42:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.690 22:42:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.690 22:42:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.690 22:42:26 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.690 22:42:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.690 22:42:26 -- paths/export.sh@5 -- $ export PATH 00:01:47.690 22:42:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.690 22:42:26 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:47.690 22:42:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:47.690 22:42:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734129746.XXXXXX 00:01:47.690 22:42:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734129746.OWtsb1 00:01:47.690 22:42:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:47.690 22:42:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:47.690 22:42:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:47.690 22:42:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:47.690 22:42:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:47.690 22:42:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:47.690 22:42:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:47.690 22:42:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.690 22:42:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:47.690 22:42:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:47.690 22:42:26 -- pm/common@17 -- $ local monitor 00:01:47.690 22:42:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.690 22:42:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.690 22:42:26 -- pm/common@25 -- $ sleep 1 00:01:47.690 22:42:26 -- pm/common@21 -- $ date +%s 00:01:47.690 22:42:26 -- pm/common@21 -- $ date +%s 00:01:47.690 22:42:26 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734129746 00:01:47.690 22:42:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734129746 00:01:47.690 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734129746_collect-vmstat.pm.log 00:01:47.690 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734129746_collect-cpu-load.pm.log 00:01:48.633 22:42:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:48.633 22:42:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:48.633 22:42:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:48.633 22:42:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:48.633 22:42:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:48.633 Fri Dec 13 10:42:27 PM UTC 2024 00:01:48.633 22:42:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:48.633 v25.01-rc1-2-ge01cb43b8 00:01:48.633 22:42:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:48.633 22:42:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:48.633 22:42:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:48.633 22:42:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:48.633 22:42:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.633 ************************************ 00:01:48.633 START TEST asan 00:01:48.633 ************************************ 00:01:48.633 using asan 00:01:48.633 22:42:27 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:48.633 00:01:48.633 real 0m0.000s 00:01:48.633 user 0m0.000s 00:01:48.633 sys 0m0.000s 00:01:48.633 ************************************ 00:01:48.633 END TEST asan 00:01:48.633 ************************************ 00:01:48.633 22:42:27 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:48.633 22:42:27 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:48.633 22:42:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:48.633 22:42:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:48.633 22:42:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:48.633 22:42:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:48.633 22:42:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.633 ************************************ 00:01:48.633 START TEST ubsan 00:01:48.633 ************************************ 00:01:48.633 using ubsan 00:01:48.633 22:42:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:48.633 ************************************ 00:01:48.633 END TEST ubsan 00:01:48.633 ************************************ 00:01:48.633 00:01:48.633 real 0m0.000s 00:01:48.633 user 0m0.000s 00:01:48.633 sys 0m0.000s 00:01:48.633 22:42:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:48.633 22:42:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:48.895 22:42:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:48.895 22:42:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:48.895 22:42:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:48.895 22:42:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:48.895 22:42:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:48.895 22:42:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:48.895 22:42:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
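The version string printed by 'git describe --tags' above follows git's usual <nearest-tag>-<commits-since-tag>-g<short-sha> form, so it can be read directly against the 'git log --oneline -n5' output captured earlier in this log.

# v25.01-rc1-2-ge01cb43b8 = 2 commits past tag v25.01-rc1, at commit e01cb43b8
# ("mk/spdk.common.mk sed the minor version", the HEAD shown by the short log above)
git -C /home/vagrant/spdk_repo/spdk describe --tags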
00:01:48.895 22:42:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:48.895 22:42:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:48.895 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:48.895 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:49.467 Using 'verbs' RDMA provider 00:02:02.281 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:12.320 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:12.321 Creating mk/config.mk...done. 00:02:12.321 Creating mk/cc.flags.mk...done. 00:02:12.321 Type 'make' to build. 00:02:12.321 22:42:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:12.321 22:42:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:12.321 22:42:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:12.321 22:42:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.321 ************************************ 00:02:12.321 START TEST make 00:02:12.321 ************************************ 00:02:12.321 22:42:51 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:12.321 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:12.321 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:12.321 meson setup builddir \ 00:02:12.321 -Dwith-libaio=enabled \ 00:02:12.321 -Dwith-liburing=enabled \ 00:02:12.321 -Dwith-libvfn=disabled \ 00:02:12.321 -Dwith-spdk=disabled \ 00:02:12.321 -Dexamples=false \ 00:02:12.321 -Dtests=false \ 00:02:12.321 -Dtools=false && \ 00:02:12.321 meson compile -C builddir && \ 00:02:12.321 cd -) 00:02:14.872 The Meson build system 00:02:14.872 Version: 1.5.0 00:02:14.872 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:14.872 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:14.872 Build type: native build 00:02:14.872 Project name: xnvme 00:02:14.872 Project version: 0.7.5 00:02:14.872 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.872 C linker for the host machine: cc ld.bfd 2.40-14 00:02:14.872 Host machine cpu family: x86_64 00:02:14.872 Host machine cpu: x86_64 00:02:14.872 Message: host_machine.system: linux 00:02:14.872 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:14.872 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:14.872 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:14.872 Run-time dependency threads found: YES 00:02:14.872 Has header "setupapi.h" : NO 00:02:14.872 Has header "linux/blkzoned.h" : YES 00:02:14.872 Has header "linux/blkzoned.h" : YES (cached) 00:02:14.872 Has header "libaio.h" : YES 00:02:14.872 Library aio found: YES 00:02:14.872 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.872 Run-time dependency liburing found: YES 2.2 00:02:14.872 Dependency libvfn skipped: feature with-libvfn disabled 00:02:14.872 Found CMake: /usr/bin/cmake (3.27.7) 00:02:14.872 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:14.872 Subproject spdk : skipped: feature with-spdk disabled 00:02:14.872 Run-time dependency appleframeworks found: NO (tried framework) 00:02:14.872 Run-time dependency appleframeworks found: NO (tried framework) 00:02:14.872 Library rt found: 
YES 00:02:14.872 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:14.872 Configuring xnvme_config.h using configuration 00:02:14.872 Configuring xnvme.spec using configuration 00:02:14.872 Run-time dependency bash-completion found: YES 2.11 00:02:14.872 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:14.872 Program cp found: YES (/usr/bin/cp) 00:02:14.872 Build targets in project: 3 00:02:14.872 00:02:14.872 xnvme 0.7.5 00:02:14.872 00:02:14.872 Subprojects 00:02:14.872 spdk : NO Feature 'with-spdk' disabled 00:02:14.872 00:02:14.872 User defined options 00:02:14.872 examples : false 00:02:14.872 tests : false 00:02:14.872 tools : false 00:02:14.872 with-libaio : enabled 00:02:14.872 with-liburing: enabled 00:02:14.872 with-libvfn : disabled 00:02:14.872 with-spdk : disabled 00:02:14.872 00:02:14.872 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.134 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:15.134 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:15.134 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:15.134 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:15.134 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:15.134 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:15.134 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:15.134 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:15.395 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:15.395 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:15.395 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:15.395 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:15.395 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:15.395 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:15.395 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:15.395 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:15.395 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:15.395 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:15.395 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:15.395 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:15.395 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:15.395 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:15.395 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:15.395 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:15.395 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:15.395 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:15.395 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:15.395 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:15.395 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:15.395 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:15.655 
[30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:15.655 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:15.655 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:15.655 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:15.655 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:15.655 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:15.655 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:15.655 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:15.655 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:15.655 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:15.655 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:15.655 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:15.655 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:15.655 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:15.655 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:15.655 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:15.655 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:15.655 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:15.655 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:15.655 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:15.655 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:15.655 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:15.655 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:15.655 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:15.655 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:15.655 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:15.655 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:15.655 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:15.655 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:15.655 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:15.655 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:15.914 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:15.914 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:15.914 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:15.914 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:15.914 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:15.914 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:15.914 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:15.914 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:15.914 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:15.914 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:15.914 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 
00:02:15.914 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:15.914 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:16.478 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:16.478 [75/76] Linking static target lib/libxnvme.a 00:02:16.478 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:16.478 INFO: autodetecting backend as ninja 00:02:16.478 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:16.478 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:23.040 The Meson build system 00:02:23.040 Version: 1.5.0 00:02:23.040 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:23.040 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:23.040 Build type: native build 00:02:23.040 Program cat found: YES (/usr/bin/cat) 00:02:23.040 Project name: DPDK 00:02:23.040 Project version: 24.03.0 00:02:23.040 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:23.040 C linker for the host machine: cc ld.bfd 2.40-14 00:02:23.040 Host machine cpu family: x86_64 00:02:23.040 Host machine cpu: x86_64 00:02:23.040 Message: ## Building in Developer Mode ## 00:02:23.040 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:23.040 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:23.040 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:23.040 Program python3 found: YES (/usr/bin/python3) 00:02:23.040 Program cat found: YES (/usr/bin/cat) 00:02:23.040 Compiler for C supports arguments -march=native: YES 00:02:23.040 Checking for size of "void *" : 8 00:02:23.040 Checking for size of "void *" : 8 (cached) 00:02:23.040 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:23.040 Library m found: YES 00:02:23.040 Library numa found: YES 00:02:23.040 Has header "numaif.h" : YES 00:02:23.040 Library fdt found: NO 00:02:23.040 Library execinfo found: NO 00:02:23.040 Has header "execinfo.h" : YES 00:02:23.040 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.040 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:23.040 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:23.040 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:23.040 Run-time dependency openssl found: YES 3.1.1 00:02:23.040 Run-time dependency libpcap found: YES 1.10.4 00:02:23.040 Has header "pcap.h" with dependency libpcap: YES 00:02:23.040 Compiler for C supports arguments -Wcast-qual: YES 00:02:23.040 Compiler for C supports arguments -Wdeprecated: YES 00:02:23.040 Compiler for C supports arguments -Wformat: YES 00:02:23.040 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:23.040 Compiler for C supports arguments -Wformat-security: NO 00:02:23.040 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.040 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:23.040 Compiler for C supports arguments -Wnested-externs: YES 00:02:23.040 Compiler for C supports arguments -Wold-style-definition: YES 00:02:23.040 Compiler for C supports arguments -Wpointer-arith: YES 00:02:23.040 Compiler for C supports arguments -Wsign-compare: YES 00:02:23.040 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:23.040 Compiler for C supports arguments -Wundef: YES 00:02:23.040 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.040 
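Each 'Compiler for C supports arguments ...' line in the Meson output is the result of a small compile probe. A rough manual equivalent is sketched below; it is not Meson's exact check (Meson caches results and handles unknown-option warnings itself), just an approximation using a throwaway translation unit.

# Approximate the has_argument() probes logged above: compile an empty program with the flag
# and treat a clean, warning-free compile as support (file name and flag are illustrative):
echo 'int main(void){return 0;}' > /tmp/flag_probe.c
cc -Werror -Wcast-qual -c /tmp/flag_probe.c -o /dev/null \
    && echo 'Compiler for C supports arguments -Wcast-qual: YES'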
Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:23.040 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:23.040 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.040 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:23.040 Program objdump found: YES (/usr/bin/objdump) 00:02:23.040 Compiler for C supports arguments -mavx512f: YES 00:02:23.040 Checking if "AVX512 checking" compiles: YES 00:02:23.040 Fetching value of define "__SSE4_2__" : 1 00:02:23.040 Fetching value of define "__AES__" : 1 00:02:23.040 Fetching value of define "__AVX__" : 1 00:02:23.040 Fetching value of define "__AVX2__" : 1 00:02:23.040 Fetching value of define "__AVX512BW__" : 1 00:02:23.040 Fetching value of define "__AVX512CD__" : 1 00:02:23.040 Fetching value of define "__AVX512DQ__" : 1 00:02:23.040 Fetching value of define "__AVX512F__" : 1 00:02:23.040 Fetching value of define "__AVX512VL__" : 1 00:02:23.040 Fetching value of define "__PCLMUL__" : 1 00:02:23.040 Fetching value of define "__RDRND__" : 1 00:02:23.040 Fetching value of define "__RDSEED__" : 1 00:02:23.040 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:23.040 Fetching value of define "__znver1__" : (undefined) 00:02:23.040 Fetching value of define "__znver2__" : (undefined) 00:02:23.040 Fetching value of define "__znver3__" : (undefined) 00:02:23.040 Fetching value of define "__znver4__" : (undefined) 00:02:23.040 Library asan found: YES 00:02:23.040 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:23.040 Message: lib/log: Defining dependency "log" 00:02:23.040 Message: lib/kvargs: Defining dependency "kvargs" 00:02:23.040 Message: lib/telemetry: Defining dependency "telemetry" 00:02:23.040 Library rt found: YES 00:02:23.040 Checking for function "getentropy" : NO 00:02:23.040 Message: lib/eal: Defining dependency "eal" 00:02:23.040 Message: lib/ring: Defining dependency "ring" 00:02:23.040 Message: lib/rcu: Defining dependency "rcu" 00:02:23.040 Message: lib/mempool: Defining dependency "mempool" 00:02:23.040 Message: lib/mbuf: Defining dependency "mbuf" 00:02:23.040 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:23.040 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.040 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.040 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.040 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.040 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:23.040 Compiler for C supports arguments -mpclmul: YES 00:02:23.040 Compiler for C supports arguments -maes: YES 00:02:23.040 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.040 Compiler for C supports arguments -mavx512bw: YES 00:02:23.040 Compiler for C supports arguments -mavx512dq: YES 00:02:23.040 Compiler for C supports arguments -mavx512vl: YES 00:02:23.040 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:23.040 Compiler for C supports arguments -mavx2: YES 00:02:23.040 Compiler for C supports arguments -mavx: YES 00:02:23.040 Message: lib/net: Defining dependency "net" 00:02:23.040 Message: lib/meter: Defining dependency "meter" 00:02:23.040 Message: lib/ethdev: Defining dependency "ethdev" 00:02:23.040 Message: lib/pci: Defining dependency "pci" 00:02:23.040 Message: lib/cmdline: Defining dependency "cmdline" 00:02:23.040 Message: lib/hash: Defining dependency "hash" 00:02:23.040 Message: lib/timer: Defining dependency "timer" 00:02:23.040 Message: 
lib/compressdev: Defining dependency "compressdev" 00:02:23.040 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:23.040 Message: lib/dmadev: Defining dependency "dmadev" 00:02:23.040 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:23.040 Message: lib/power: Defining dependency "power" 00:02:23.040 Message: lib/reorder: Defining dependency "reorder" 00:02:23.040 Message: lib/security: Defining dependency "security" 00:02:23.040 Has header "linux/userfaultfd.h" : YES 00:02:23.040 Has header "linux/vduse.h" : YES 00:02:23.040 Message: lib/vhost: Defining dependency "vhost" 00:02:23.040 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.040 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.040 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.040 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.040 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:23.040 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:23.040 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:23.040 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:23.040 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:23.040 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:23.040 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:23.040 Configuring doxy-api-html.conf using configuration 00:02:23.040 Configuring doxy-api-man.conf using configuration 00:02:23.040 Program mandb found: YES (/usr/bin/mandb) 00:02:23.040 Program sphinx-build found: NO 00:02:23.040 Configuring rte_build_config.h using configuration 00:02:23.040 Message: 00:02:23.040 ================= 00:02:23.040 Applications Enabled 00:02:23.040 ================= 00:02:23.040 00:02:23.040 apps: 00:02:23.040 00:02:23.040 00:02:23.040 Message: 00:02:23.040 ================= 00:02:23.040 Libraries Enabled 00:02:23.040 ================= 00:02:23.040 00:02:23.040 libs: 00:02:23.040 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:23.040 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:23.040 cryptodev, dmadev, power, reorder, security, vhost, 00:02:23.040 00:02:23.040 Message: 00:02:23.040 =============== 00:02:23.040 Drivers Enabled 00:02:23.040 =============== 00:02:23.040 00:02:23.040 common: 00:02:23.040 00:02:23.040 bus: 00:02:23.040 pci, vdev, 00:02:23.040 mempool: 00:02:23.040 ring, 00:02:23.040 dma: 00:02:23.040 00:02:23.040 net: 00:02:23.040 00:02:23.040 crypto: 00:02:23.040 00:02:23.040 compress: 00:02:23.040 00:02:23.040 vdpa: 00:02:23.040 00:02:23.040 00:02:23.040 Message: 00:02:23.040 ================= 00:02:23.040 Content Skipped 00:02:23.041 ================= 00:02:23.041 00:02:23.041 apps: 00:02:23.041 dumpcap: explicitly disabled via build config 00:02:23.041 graph: explicitly disabled via build config 00:02:23.041 pdump: explicitly disabled via build config 00:02:23.041 proc-info: explicitly disabled via build config 00:02:23.041 test-acl: explicitly disabled via build config 00:02:23.041 test-bbdev: explicitly disabled via build config 00:02:23.041 test-cmdline: explicitly disabled via build config 00:02:23.041 test-compress-perf: explicitly disabled via build config 00:02:23.041 test-crypto-perf: explicitly disabled via build config 00:02:23.041 test-dma-perf: explicitly disabled via build config 00:02:23.041 test-eventdev: explicitly disabled via 
build config 00:02:23.041 test-fib: explicitly disabled via build config 00:02:23.041 test-flow-perf: explicitly disabled via build config 00:02:23.041 test-gpudev: explicitly disabled via build config 00:02:23.041 test-mldev: explicitly disabled via build config 00:02:23.041 test-pipeline: explicitly disabled via build config 00:02:23.041 test-pmd: explicitly disabled via build config 00:02:23.041 test-regex: explicitly disabled via build config 00:02:23.041 test-sad: explicitly disabled via build config 00:02:23.041 test-security-perf: explicitly disabled via build config 00:02:23.041 00:02:23.041 libs: 00:02:23.041 argparse: explicitly disabled via build config 00:02:23.041 metrics: explicitly disabled via build config 00:02:23.041 acl: explicitly disabled via build config 00:02:23.041 bbdev: explicitly disabled via build config 00:02:23.041 bitratestats: explicitly disabled via build config 00:02:23.041 bpf: explicitly disabled via build config 00:02:23.041 cfgfile: explicitly disabled via build config 00:02:23.041 distributor: explicitly disabled via build config 00:02:23.041 efd: explicitly disabled via build config 00:02:23.041 eventdev: explicitly disabled via build config 00:02:23.041 dispatcher: explicitly disabled via build config 00:02:23.041 gpudev: explicitly disabled via build config 00:02:23.041 gro: explicitly disabled via build config 00:02:23.041 gso: explicitly disabled via build config 00:02:23.041 ip_frag: explicitly disabled via build config 00:02:23.041 jobstats: explicitly disabled via build config 00:02:23.041 latencystats: explicitly disabled via build config 00:02:23.041 lpm: explicitly disabled via build config 00:02:23.041 member: explicitly disabled via build config 00:02:23.041 pcapng: explicitly disabled via build config 00:02:23.041 rawdev: explicitly disabled via build config 00:02:23.041 regexdev: explicitly disabled via build config 00:02:23.041 mldev: explicitly disabled via build config 00:02:23.041 rib: explicitly disabled via build config 00:02:23.041 sched: explicitly disabled via build config 00:02:23.041 stack: explicitly disabled via build config 00:02:23.041 ipsec: explicitly disabled via build config 00:02:23.041 pdcp: explicitly disabled via build config 00:02:23.041 fib: explicitly disabled via build config 00:02:23.041 port: explicitly disabled via build config 00:02:23.041 pdump: explicitly disabled via build config 00:02:23.041 table: explicitly disabled via build config 00:02:23.041 pipeline: explicitly disabled via build config 00:02:23.041 graph: explicitly disabled via build config 00:02:23.041 node: explicitly disabled via build config 00:02:23.041 00:02:23.041 drivers: 00:02:23.041 common/cpt: not in enabled drivers build config 00:02:23.041 common/dpaax: not in enabled drivers build config 00:02:23.041 common/iavf: not in enabled drivers build config 00:02:23.041 common/idpf: not in enabled drivers build config 00:02:23.041 common/ionic: not in enabled drivers build config 00:02:23.041 common/mvep: not in enabled drivers build config 00:02:23.041 common/octeontx: not in enabled drivers build config 00:02:23.041 bus/auxiliary: not in enabled drivers build config 00:02:23.041 bus/cdx: not in enabled drivers build config 00:02:23.041 bus/dpaa: not in enabled drivers build config 00:02:23.041 bus/fslmc: not in enabled drivers build config 00:02:23.041 bus/ifpga: not in enabled drivers build config 00:02:23.041 bus/platform: not in enabled drivers build config 00:02:23.041 bus/uacce: not in enabled drivers build config 00:02:23.041 
bus/vmbus: not in enabled drivers build config 00:02:23.041 common/cnxk: not in enabled drivers build config 00:02:23.041 common/mlx5: not in enabled drivers build config 00:02:23.041 common/nfp: not in enabled drivers build config 00:02:23.041 common/nitrox: not in enabled drivers build config 00:02:23.041 common/qat: not in enabled drivers build config 00:02:23.041 common/sfc_efx: not in enabled drivers build config 00:02:23.041 mempool/bucket: not in enabled drivers build config 00:02:23.041 mempool/cnxk: not in enabled drivers build config 00:02:23.041 mempool/dpaa: not in enabled drivers build config 00:02:23.041 mempool/dpaa2: not in enabled drivers build config 00:02:23.041 mempool/octeontx: not in enabled drivers build config 00:02:23.041 mempool/stack: not in enabled drivers build config 00:02:23.041 dma/cnxk: not in enabled drivers build config 00:02:23.041 dma/dpaa: not in enabled drivers build config 00:02:23.041 dma/dpaa2: not in enabled drivers build config 00:02:23.041 dma/hisilicon: not in enabled drivers build config 00:02:23.041 dma/idxd: not in enabled drivers build config 00:02:23.041 dma/ioat: not in enabled drivers build config 00:02:23.041 dma/skeleton: not in enabled drivers build config 00:02:23.041 net/af_packet: not in enabled drivers build config 00:02:23.041 net/af_xdp: not in enabled drivers build config 00:02:23.041 net/ark: not in enabled drivers build config 00:02:23.041 net/atlantic: not in enabled drivers build config 00:02:23.041 net/avp: not in enabled drivers build config 00:02:23.041 net/axgbe: not in enabled drivers build config 00:02:23.041 net/bnx2x: not in enabled drivers build config 00:02:23.041 net/bnxt: not in enabled drivers build config 00:02:23.041 net/bonding: not in enabled drivers build config 00:02:23.041 net/cnxk: not in enabled drivers build config 00:02:23.041 net/cpfl: not in enabled drivers build config 00:02:23.041 net/cxgbe: not in enabled drivers build config 00:02:23.041 net/dpaa: not in enabled drivers build config 00:02:23.041 net/dpaa2: not in enabled drivers build config 00:02:23.041 net/e1000: not in enabled drivers build config 00:02:23.041 net/ena: not in enabled drivers build config 00:02:23.041 net/enetc: not in enabled drivers build config 00:02:23.041 net/enetfec: not in enabled drivers build config 00:02:23.041 net/enic: not in enabled drivers build config 00:02:23.041 net/failsafe: not in enabled drivers build config 00:02:23.041 net/fm10k: not in enabled drivers build config 00:02:23.041 net/gve: not in enabled drivers build config 00:02:23.041 net/hinic: not in enabled drivers build config 00:02:23.041 net/hns3: not in enabled drivers build config 00:02:23.041 net/i40e: not in enabled drivers build config 00:02:23.041 net/iavf: not in enabled drivers build config 00:02:23.041 net/ice: not in enabled drivers build config 00:02:23.041 net/idpf: not in enabled drivers build config 00:02:23.041 net/igc: not in enabled drivers build config 00:02:23.041 net/ionic: not in enabled drivers build config 00:02:23.041 net/ipn3ke: not in enabled drivers build config 00:02:23.041 net/ixgbe: not in enabled drivers build config 00:02:23.041 net/mana: not in enabled drivers build config 00:02:23.041 net/memif: not in enabled drivers build config 00:02:23.041 net/mlx4: not in enabled drivers build config 00:02:23.041 net/mlx5: not in enabled drivers build config 00:02:23.041 net/mvneta: not in enabled drivers build config 00:02:23.041 net/mvpp2: not in enabled drivers build config 00:02:23.041 net/netvsc: not in enabled drivers 
build config 00:02:23.041 net/nfb: not in enabled drivers build config 00:02:23.041 net/nfp: not in enabled drivers build config 00:02:23.041 net/ngbe: not in enabled drivers build config 00:02:23.041 net/null: not in enabled drivers build config 00:02:23.041 net/octeontx: not in enabled drivers build config 00:02:23.041 net/octeon_ep: not in enabled drivers build config 00:02:23.041 net/pcap: not in enabled drivers build config 00:02:23.041 net/pfe: not in enabled drivers build config 00:02:23.041 net/qede: not in enabled drivers build config 00:02:23.041 net/ring: not in enabled drivers build config 00:02:23.041 net/sfc: not in enabled drivers build config 00:02:23.041 net/softnic: not in enabled drivers build config 00:02:23.041 net/tap: not in enabled drivers build config 00:02:23.041 net/thunderx: not in enabled drivers build config 00:02:23.041 net/txgbe: not in enabled drivers build config 00:02:23.041 net/vdev_netvsc: not in enabled drivers build config 00:02:23.041 net/vhost: not in enabled drivers build config 00:02:23.041 net/virtio: not in enabled drivers build config 00:02:23.041 net/vmxnet3: not in enabled drivers build config 00:02:23.041 raw/*: missing internal dependency, "rawdev" 00:02:23.041 crypto/armv8: not in enabled drivers build config 00:02:23.041 crypto/bcmfs: not in enabled drivers build config 00:02:23.041 crypto/caam_jr: not in enabled drivers build config 00:02:23.041 crypto/ccp: not in enabled drivers build config 00:02:23.041 crypto/cnxk: not in enabled drivers build config 00:02:23.041 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.041 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.041 crypto/ipsec_mb: not in enabled drivers build config 00:02:23.041 crypto/mlx5: not in enabled drivers build config 00:02:23.041 crypto/mvsam: not in enabled drivers build config 00:02:23.041 crypto/nitrox: not in enabled drivers build config 00:02:23.041 crypto/null: not in enabled drivers build config 00:02:23.041 crypto/octeontx: not in enabled drivers build config 00:02:23.041 crypto/openssl: not in enabled drivers build config 00:02:23.041 crypto/scheduler: not in enabled drivers build config 00:02:23.041 crypto/uadk: not in enabled drivers build config 00:02:23.041 crypto/virtio: not in enabled drivers build config 00:02:23.041 compress/isal: not in enabled drivers build config 00:02:23.041 compress/mlx5: not in enabled drivers build config 00:02:23.041 compress/nitrox: not in enabled drivers build config 00:02:23.041 compress/octeontx: not in enabled drivers build config 00:02:23.042 compress/zlib: not in enabled drivers build config 00:02:23.042 regex/*: missing internal dependency, "regexdev" 00:02:23.042 ml/*: missing internal dependency, "mldev" 00:02:23.042 vdpa/ifc: not in enabled drivers build config 00:02:23.042 vdpa/mlx5: not in enabled drivers build config 00:02:23.042 vdpa/nfp: not in enabled drivers build config 00:02:23.042 vdpa/sfc: not in enabled drivers build config 00:02:23.042 event/*: missing internal dependency, "eventdev" 00:02:23.042 baseband/*: missing internal dependency, "bbdev" 00:02:23.042 gpu/*: missing internal dependency, "gpudev" 00:02:23.042 00:02:23.042 00:02:23.042 Build targets in project: 84 00:02:23.042 00:02:23.042 DPDK 24.03.0 00:02:23.042 00:02:23.042 User defined options 00:02:23.042 buildtype : debug 00:02:23.042 default_library : shared 00:02:23.042 libdir : lib 00:02:23.042 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:23.042 b_sanitize : address 00:02:23.042 c_args : 
-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:23.042 c_link_args : 00:02:23.042 cpu_instruction_set: native 00:02:23.042 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:23.042 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:23.042 enable_docs : false 00:02:23.042 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:23.042 enable_kmods : false 00:02:23.042 max_lcores : 128 00:02:23.042 tests : false 00:02:23.042 00:02:23.042 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.300 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:23.300 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:23.300 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:23.300 [3/267] Linking static target lib/librte_log.a 00:02:23.300 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.300 [5/267] Linking static target lib/librte_kvargs.a 00:02:23.300 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.559 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:23.559 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.559 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.559 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.559 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.559 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.559 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.818 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.818 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.818 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.818 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.818 [18/267] Linking static target lib/librte_telemetry.a 00:02:24.076 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.076 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.076 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.076 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.076 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.076 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.076 [25/267] Linking target lib/librte_log.so.24.1 00:02:24.076 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:24.076 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.335 [28/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.335 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:24.335 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:24.335 [31/267] Linking target lib/librte_kvargs.so.24.1 00:02:24.594 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.594 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.594 [34/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:24.594 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.594 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.594 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:24.594 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.594 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:24.594 [40/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.594 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:24.594 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:24.594 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:24.594 [44/267] Linking target lib/librte_telemetry.so.24.1 00:02:24.594 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:24.852 [46/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:24.852 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.852 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.852 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:25.110 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:25.110 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:25.110 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:25.110 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:25.110 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:25.110 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:25.110 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:25.369 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:25.369 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:25.369 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:25.369 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:25.369 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:25.369 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:25.369 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:25.369 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:25.629 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:25.629 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:25.629 [67/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:25.629 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:25.924 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:25.924 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:25.924 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:25.924 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:25.925 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:25.925 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:25.925 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:25.925 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:25.925 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:25.925 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:26.183 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:26.183 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:26.183 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:26.183 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:26.183 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:26.183 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:26.183 [85/267] Linking static target lib/librte_eal.a 00:02:26.441 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:26.441 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:26.441 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:26.441 [89/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:26.441 [90/267] Linking static target lib/librte_ring.a 00:02:26.441 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:26.441 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:26.441 [93/267] Linking static target lib/librte_rcu.a 00:02:26.441 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:26.699 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:26.699 [96/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:26.699 [97/267] Linking static target lib/librte_mempool.a 00:02:26.699 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:26.699 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:26.699 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:26.957 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:26.957 [102/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.957 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.957 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:26.957 [105/267] Linking static target lib/librte_meter.a 00:02:26.957 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:27.215 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:27.215 [108/267] Linking static target lib/librte_mbuf.a 00:02:27.215 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:27.215 
[110/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:27.215 [111/267] Linking static target lib/librte_net.a 00:02:27.215 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:27.473 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:27.473 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.473 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:27.473 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.473 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:27.473 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.732 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:27.732 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:27.990 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.990 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:27.990 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:27.990 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:27.990 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:28.249 [126/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:28.249 [127/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:28.249 [128/267] Linking static target lib/librte_pci.a 00:02:28.249 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:28.249 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:28.249 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:28.508 [132/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.508 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:28.508 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:28.508 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:28.508 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:28.508 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:28.508 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:28.508 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:28.508 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:28.508 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:28.508 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:28.508 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:28.508 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:28.508 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:28.508 [146/267] Linking static target lib/librte_cmdline.a 00:02:28.766 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:28.766 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.766 [149/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:28.766 [150/267] Linking static target lib/librte_ethdev.a 00:02:28.766 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:29.025 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:29.025 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:29.025 [154/267] Linking static target lib/librte_timer.a 00:02:29.025 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:29.025 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:29.283 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:29.283 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:29.283 [159/267] Linking static target lib/librte_compressdev.a 00:02:29.283 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:29.283 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.283 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:29.283 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:29.541 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.541 [165/267] Linking static target lib/librte_hash.a 00:02:29.541 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:29.541 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:29.541 [168/267] Linking static target lib/librte_dmadev.a 00:02:29.799 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:29.799 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.799 [171/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.799 [172/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:29.799 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.799 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.058 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.058 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.058 [177/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.058 [178/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:30.058 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.058 [180/267] Linking static target lib/librte_cryptodev.a 00:02:30.058 [181/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.058 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.316 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.316 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.316 [185/267] Linking static target lib/librte_power.a 00:02:30.316 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:30.316 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:30.574 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:30.574 [189/267] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:02:30.574 [190/267] Linking static target lib/librte_security.a 00:02:30.574 [191/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:30.574 [192/267] Linking static target lib/librte_reorder.a 00:02:30.833 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:31.090 [194/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.090 [195/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.091 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.091 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:31.091 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:31.091 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:31.349 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:31.349 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:31.349 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:31.607 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:31.607 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:31.607 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:31.865 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:31.865 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:31.865 [208/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:31.865 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:31.865 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.865 [211/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:32.124 [212/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:32.124 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.124 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.124 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.124 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.124 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.124 [218/267] Linking static target drivers/librte_bus_vdev.a 00:02:32.124 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.124 [220/267] Linking static target drivers/librte_bus_pci.a 00:02:32.124 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:32.124 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.124 [223/267] Linking static target drivers/librte_mempool_ring.a 00:02:32.124 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.382 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.382 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:32.640 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:33.573 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.573 [229/267] Linking target lib/librte_eal.so.24.1 00:02:33.573 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:33.573 [231/267] Linking target lib/librte_pci.so.24.1 00:02:33.573 [232/267] Linking target lib/librte_ring.so.24.1 00:02:33.573 [233/267] Linking target lib/librte_meter.so.24.1 00:02:33.573 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:33.573 [235/267] Linking target lib/librte_timer.so.24.1 00:02:33.573 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:33.573 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:33.573 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:33.831 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:33.831 [240/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:33.831 [241/267] Linking target lib/librte_rcu.so.24.1 00:02:33.831 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:33.831 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:33.831 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:33.831 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:33.831 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:33.831 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:33.831 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:34.089 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:34.089 [250/267] Linking target lib/librte_reorder.so.24.1 00:02:34.089 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:34.089 [252/267] Linking target lib/librte_net.so.24.1 00:02:34.089 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:34.089 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:34.089 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:34.089 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:34.089 [257/267] Linking target lib/librte_security.so.24.1 00:02:34.089 [258/267] Linking target lib/librte_hash.so.24.1 00:02:34.347 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.347 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:34.347 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:34.347 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:34.605 [263/267] Linking target lib/librte_power.so.24.1 00:02:35.171 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:35.171 [265/267] Linking static target lib/librte_vhost.a 00:02:36.565 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.566 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:36.566 INFO: autodetecting backend as ninja 00:02:36.566 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:51.429 CC lib/ut_mock/mock.o 00:02:51.429 CC 
lib/log/log.o 00:02:51.429 CC lib/log/log_deprecated.o 00:02:51.429 CC lib/log/log_flags.o 00:02:51.429 CC lib/ut/ut.o 00:02:51.429 LIB libspdk_ut_mock.a 00:02:51.429 LIB libspdk_ut.a 00:02:51.429 LIB libspdk_log.a 00:02:51.429 SO libspdk_ut_mock.so.6.0 00:02:51.429 SO libspdk_ut.so.2.0 00:02:51.429 SO libspdk_log.so.7.1 00:02:51.429 SYMLINK libspdk_ut_mock.so 00:02:51.429 SYMLINK libspdk_ut.so 00:02:51.429 SYMLINK libspdk_log.so 00:02:51.429 CC lib/util/base64.o 00:02:51.429 CC lib/util/bit_array.o 00:02:51.429 CC lib/util/cpuset.o 00:02:51.429 CC lib/util/crc32.o 00:02:51.429 CC lib/util/crc16.o 00:02:51.429 CC lib/ioat/ioat.o 00:02:51.429 CC lib/util/crc32c.o 00:02:51.429 CC lib/dma/dma.o 00:02:51.429 CXX lib/trace_parser/trace.o 00:02:51.429 CC lib/util/crc32_ieee.o 00:02:51.429 CC lib/vfio_user/host/vfio_user_pci.o 00:02:51.429 CC lib/util/crc64.o 00:02:51.429 CC lib/vfio_user/host/vfio_user.o 00:02:51.429 CC lib/util/dif.o 00:02:51.429 LIB libspdk_dma.a 00:02:51.429 CC lib/util/fd.o 00:02:51.429 SO libspdk_dma.so.5.0 00:02:51.429 LIB libspdk_ioat.a 00:02:51.429 CC lib/util/fd_group.o 00:02:51.429 CC lib/util/file.o 00:02:51.429 CC lib/util/hexlify.o 00:02:51.429 SO libspdk_ioat.so.7.0 00:02:51.429 SYMLINK libspdk_dma.so 00:02:51.429 CC lib/util/iov.o 00:02:51.429 SYMLINK libspdk_ioat.so 00:02:51.429 CC lib/util/math.o 00:02:51.429 CC lib/util/net.o 00:02:51.429 CC lib/util/pipe.o 00:02:51.429 LIB libspdk_vfio_user.a 00:02:51.429 SO libspdk_vfio_user.so.5.0 00:02:51.429 CC lib/util/strerror_tls.o 00:02:51.429 CC lib/util/string.o 00:02:51.429 CC lib/util/uuid.o 00:02:51.429 SYMLINK libspdk_vfio_user.so 00:02:51.429 CC lib/util/xor.o 00:02:51.429 CC lib/util/zipf.o 00:02:51.429 CC lib/util/md5.o 00:02:51.429 LIB libspdk_util.a 00:02:51.429 SO libspdk_util.so.10.1 00:02:51.429 LIB libspdk_trace_parser.a 00:02:51.429 SO libspdk_trace_parser.so.6.0 00:02:51.429 SYMLINK libspdk_util.so 00:02:51.429 SYMLINK libspdk_trace_parser.so 00:02:51.429 CC lib/json/json_parse.o 00:02:51.429 CC lib/json/json_util.o 00:02:51.429 CC lib/conf/conf.o 00:02:51.429 CC lib/json/json_write.o 00:02:51.429 CC lib/idxd/idxd.o 00:02:51.429 CC lib/idxd/idxd_user.o 00:02:51.429 CC lib/idxd/idxd_kernel.o 00:02:51.429 CC lib/rdma_utils/rdma_utils.o 00:02:51.429 CC lib/env_dpdk/env.o 00:02:51.429 CC lib/vmd/vmd.o 00:02:51.429 CC lib/env_dpdk/memory.o 00:02:51.429 LIB libspdk_conf.a 00:02:51.429 CC lib/env_dpdk/pci.o 00:02:51.429 SO libspdk_conf.so.6.0 00:02:51.429 LIB libspdk_rdma_utils.a 00:02:51.429 SO libspdk_rdma_utils.so.1.0 00:02:51.429 SYMLINK libspdk_conf.so 00:02:51.429 CC lib/vmd/led.o 00:02:51.429 CC lib/env_dpdk/init.o 00:02:51.429 CC lib/env_dpdk/threads.o 00:02:51.429 LIB libspdk_json.a 00:02:51.429 SYMLINK libspdk_rdma_utils.so 00:02:51.429 SO libspdk_json.so.6.0 00:02:51.429 SYMLINK libspdk_json.so 00:02:51.429 CC lib/env_dpdk/pci_ioat.o 00:02:51.429 CC lib/env_dpdk/pci_virtio.o 00:02:51.429 CC lib/env_dpdk/pci_vmd.o 00:02:51.429 CC lib/rdma_provider/common.o 00:02:51.429 CC lib/env_dpdk/pci_idxd.o 00:02:51.429 CC lib/env_dpdk/pci_event.o 00:02:51.429 CC lib/env_dpdk/sigbus_handler.o 00:02:51.429 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:51.429 LIB libspdk_idxd.a 00:02:51.429 CC lib/env_dpdk/pci_dpdk.o 00:02:51.429 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:51.429 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.429 SO libspdk_idxd.so.12.1 00:02:51.429 LIB libspdk_vmd.a 00:02:51.429 SO libspdk_vmd.so.6.0 00:02:51.429 SYMLINK libspdk_idxd.so 00:02:51.429 SYMLINK libspdk_vmd.so 00:02:51.429 LIB 
libspdk_rdma_provider.a 00:02:51.429 SO libspdk_rdma_provider.so.7.0 00:02:51.429 CC lib/jsonrpc/jsonrpc_server.o 00:02:51.429 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:51.429 CC lib/jsonrpc/jsonrpc_client.o 00:02:51.429 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:51.429 SYMLINK libspdk_rdma_provider.so 00:02:51.429 LIB libspdk_jsonrpc.a 00:02:51.687 SO libspdk_jsonrpc.so.6.0 00:02:51.687 SYMLINK libspdk_jsonrpc.so 00:02:51.946 LIB libspdk_env_dpdk.a 00:02:51.946 CC lib/rpc/rpc.o 00:02:51.946 SO libspdk_env_dpdk.so.15.1 00:02:51.946 SYMLINK libspdk_env_dpdk.so 00:02:52.205 LIB libspdk_rpc.a 00:02:52.205 SO libspdk_rpc.so.6.0 00:02:52.205 SYMLINK libspdk_rpc.so 00:02:52.463 CC lib/trace/trace.o 00:02:52.463 CC lib/trace/trace_rpc.o 00:02:52.463 CC lib/trace/trace_flags.o 00:02:52.463 CC lib/notify/notify_rpc.o 00:02:52.463 CC lib/notify/notify.o 00:02:52.463 CC lib/keyring/keyring_rpc.o 00:02:52.463 CC lib/keyring/keyring.o 00:02:52.463 LIB libspdk_notify.a 00:02:52.463 SO libspdk_notify.so.6.0 00:02:52.463 LIB libspdk_trace.a 00:02:52.463 SYMLINK libspdk_notify.so 00:02:52.463 LIB libspdk_keyring.a 00:02:52.463 SO libspdk_trace.so.11.0 00:02:52.721 SO libspdk_keyring.so.2.0 00:02:52.722 SYMLINK libspdk_trace.so 00:02:52.722 SYMLINK libspdk_keyring.so 00:02:52.980 CC lib/thread/thread.o 00:02:52.980 CC lib/sock/sock.o 00:02:52.980 CC lib/thread/iobuf.o 00:02:52.980 CC lib/sock/sock_rpc.o 00:02:53.238 LIB libspdk_sock.a 00:02:53.238 SO libspdk_sock.so.10.0 00:02:53.496 SYMLINK libspdk_sock.so 00:02:53.754 CC lib/nvme/nvme_ctrlr.o 00:02:53.754 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:53.754 CC lib/nvme/nvme_fabric.o 00:02:53.754 CC lib/nvme/nvme_ns_cmd.o 00:02:53.754 CC lib/nvme/nvme_pcie.o 00:02:53.754 CC lib/nvme/nvme_ns.o 00:02:53.754 CC lib/nvme/nvme_pcie_common.o 00:02:53.754 CC lib/nvme/nvme.o 00:02:53.754 CC lib/nvme/nvme_qpair.o 00:02:54.011 LIB libspdk_thread.a 00:02:54.011 SO libspdk_thread.so.11.0 00:02:54.011 SYMLINK libspdk_thread.so 00:02:54.011 CC lib/nvme/nvme_quirks.o 00:02:54.269 CC lib/nvme/nvme_transport.o 00:02:54.269 CC lib/nvme/nvme_discovery.o 00:02:54.269 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:54.269 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:54.527 CC lib/nvme/nvme_tcp.o 00:02:54.527 CC lib/nvme/nvme_opal.o 00:02:54.527 CC lib/nvme/nvme_io_msg.o 00:02:54.527 CC lib/nvme/nvme_poll_group.o 00:02:54.527 CC lib/nvme/nvme_zns.o 00:02:54.785 CC lib/nvme/nvme_stubs.o 00:02:54.785 CC lib/nvme/nvme_auth.o 00:02:54.785 CC lib/accel/accel.o 00:02:55.042 CC lib/accel/accel_rpc.o 00:02:55.042 CC lib/blob/blobstore.o 00:02:55.042 CC lib/init/json_config.o 00:02:55.042 CC lib/init/subsystem.o 00:02:55.042 CC lib/init/subsystem_rpc.o 00:02:55.042 CC lib/init/rpc.o 00:02:55.301 CC lib/accel/accel_sw.o 00:02:55.301 CC lib/blob/request.o 00:02:55.301 CC lib/blob/zeroes.o 00:02:55.301 CC lib/blob/blob_bs_dev.o 00:02:55.301 LIB libspdk_init.a 00:02:55.301 SO libspdk_init.so.6.0 00:02:55.301 SYMLINK libspdk_init.so 00:02:55.558 CC lib/nvme/nvme_cuse.o 00:02:55.558 CC lib/virtio/virtio.o 00:02:55.558 CC lib/virtio/virtio_vhost_user.o 00:02:55.558 CC lib/nvme/nvme_rdma.o 00:02:55.558 CC lib/fsdev/fsdev.o 00:02:55.816 CC lib/event/app.o 00:02:55.816 CC lib/event/reactor.o 00:02:55.816 CC lib/virtio/virtio_vfio_user.o 00:02:55.816 CC lib/virtio/virtio_pci.o 00:02:55.816 CC lib/fsdev/fsdev_io.o 00:02:56.077 LIB libspdk_accel.a 00:02:56.077 CC lib/event/log_rpc.o 00:02:56.077 SO libspdk_accel.so.16.0 00:02:56.077 CC lib/fsdev/fsdev_rpc.o 00:02:56.077 SYMLINK libspdk_accel.so 00:02:56.077 CC 
lib/event/app_rpc.o 00:02:56.077 CC lib/event/scheduler_static.o 00:02:56.077 LIB libspdk_virtio.a 00:02:56.077 SO libspdk_virtio.so.7.0 00:02:56.077 LIB libspdk_fsdev.a 00:02:56.337 SO libspdk_fsdev.so.2.0 00:02:56.338 SYMLINK libspdk_virtio.so 00:02:56.338 SYMLINK libspdk_fsdev.so 00:02:56.338 CC lib/bdev/bdev.o 00:02:56.338 CC lib/bdev/bdev_zone.o 00:02:56.338 CC lib/bdev/bdev_rpc.o 00:02:56.338 CC lib/bdev/scsi_nvme.o 00:02:56.338 CC lib/bdev/part.o 00:02:56.338 LIB libspdk_event.a 00:02:56.338 SO libspdk_event.so.14.0 00:02:56.338 SYMLINK libspdk_event.so 00:02:56.338 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:56.902 LIB libspdk_nvme.a 00:02:56.902 LIB libspdk_fuse_dispatcher.a 00:02:56.902 SO libspdk_fuse_dispatcher.so.1.0 00:02:56.902 SO libspdk_nvme.so.15.0 00:02:56.902 SYMLINK libspdk_fuse_dispatcher.so 00:02:57.160 SYMLINK libspdk_nvme.so 00:02:58.533 LIB libspdk_blob.a 00:02:58.533 SO libspdk_blob.so.12.0 00:02:58.533 SYMLINK libspdk_blob.so 00:02:58.533 CC lib/lvol/lvol.o 00:02:58.533 CC lib/blobfs/blobfs.o 00:02:58.533 CC lib/blobfs/tree.o 00:02:59.098 LIB libspdk_bdev.a 00:02:59.098 SO libspdk_bdev.so.17.0 00:02:59.098 SYMLINK libspdk_bdev.so 00:02:59.373 CC lib/scsi/dev.o 00:02:59.373 CC lib/scsi/port.o 00:02:59.373 CC lib/scsi/lun.o 00:02:59.373 CC lib/scsi/scsi.o 00:02:59.373 CC lib/nvmf/ctrlr.o 00:02:59.373 CC lib/nbd/nbd.o 00:02:59.373 CC lib/ftl/ftl_core.o 00:02:59.373 CC lib/ublk/ublk.o 00:02:59.373 LIB libspdk_lvol.a 00:02:59.373 CC lib/ftl/ftl_init.o 00:02:59.373 CC lib/scsi/scsi_bdev.o 00:02:59.373 SO libspdk_lvol.so.11.0 00:02:59.373 LIB libspdk_blobfs.a 00:02:59.631 SO libspdk_blobfs.so.11.0 00:02:59.631 SYMLINK libspdk_lvol.so 00:02:59.631 CC lib/ftl/ftl_layout.o 00:02:59.631 CC lib/ftl/ftl_debug.o 00:02:59.631 CC lib/ftl/ftl_io.o 00:02:59.631 SYMLINK libspdk_blobfs.so 00:02:59.631 CC lib/ftl/ftl_sb.o 00:02:59.631 CC lib/ftl/ftl_l2p.o 00:02:59.631 CC lib/ftl/ftl_l2p_flat.o 00:02:59.631 CC lib/nbd/nbd_rpc.o 00:02:59.631 CC lib/ftl/ftl_nv_cache.o 00:02:59.888 CC lib/ftl/ftl_band.o 00:02:59.888 CC lib/ftl/ftl_band_ops.o 00:02:59.888 CC lib/ftl/ftl_writer.o 00:02:59.888 CC lib/ftl/ftl_rq.o 00:02:59.888 LIB libspdk_nbd.a 00:02:59.888 CC lib/ftl/ftl_reloc.o 00:02:59.888 SO libspdk_nbd.so.7.0 00:02:59.888 CC lib/scsi/scsi_pr.o 00:02:59.889 SYMLINK libspdk_nbd.so 00:02:59.889 CC lib/ftl/ftl_l2p_cache.o 00:02:59.889 CC lib/ublk/ublk_rpc.o 00:02:59.889 CC lib/nvmf/ctrlr_discovery.o 00:03:00.146 CC lib/ftl/ftl_p2l.o 00:03:00.146 CC lib/ftl/ftl_p2l_log.o 00:03:00.146 LIB libspdk_ublk.a 00:03:00.146 SO libspdk_ublk.so.3.0 00:03:00.146 CC lib/nvmf/ctrlr_bdev.o 00:03:00.146 SYMLINK libspdk_ublk.so 00:03:00.146 CC lib/scsi/scsi_rpc.o 00:03:00.146 CC lib/ftl/mngt/ftl_mngt.o 00:03:00.403 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:00.403 CC lib/scsi/task.o 00:03:00.403 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:00.403 CC lib/nvmf/subsystem.o 00:03:00.403 CC lib/nvmf/nvmf.o 00:03:00.403 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:00.403 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:00.661 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:00.661 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:00.661 LIB libspdk_scsi.a 00:03:00.661 SO libspdk_scsi.so.9.0 00:03:00.661 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:00.661 SYMLINK libspdk_scsi.so 00:03:00.661 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:00.661 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:00.661 CC lib/nvmf/nvmf_rpc.o 00:03:00.661 CC lib/nvmf/transport.o 00:03:00.919 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:00.919 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:00.919 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:00.919 CC lib/nvmf/tcp.o 00:03:00.919 CC lib/nvmf/stubs.o 00:03:00.919 CC lib/nvmf/mdns_server.o 00:03:01.176 CC lib/ftl/utils/ftl_conf.o 00:03:01.176 CC lib/ftl/utils/ftl_md.o 00:03:01.176 CC lib/ftl/utils/ftl_mempool.o 00:03:01.176 CC lib/nvmf/rdma.o 00:03:01.434 CC lib/nvmf/auth.o 00:03:01.434 CC lib/ftl/utils/ftl_bitmap.o 00:03:01.434 CC lib/ftl/utils/ftl_property.o 00:03:01.434 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:01.434 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:01.434 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:01.434 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:01.693 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:01.693 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:01.693 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:01.693 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:01.693 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:01.693 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:01.693 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:01.693 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:01.693 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:01.693 CC lib/ftl/base/ftl_base_dev.o 00:03:01.950 CC lib/ftl/base/ftl_base_bdev.o 00:03:01.950 CC lib/ftl/ftl_trace.o 00:03:01.950 CC lib/iscsi/init_grp.o 00:03:01.950 CC lib/iscsi/conn.o 00:03:01.950 CC lib/iscsi/param.o 00:03:01.950 CC lib/iscsi/iscsi.o 00:03:01.950 CC lib/iscsi/portal_grp.o 00:03:01.951 CC lib/iscsi/tgt_node.o 00:03:01.951 LIB libspdk_ftl.a 00:03:01.951 CC lib/vhost/vhost.o 00:03:02.208 CC lib/iscsi/iscsi_subsystem.o 00:03:02.208 SO libspdk_ftl.so.9.0 00:03:02.208 CC lib/iscsi/iscsi_rpc.o 00:03:02.466 CC lib/iscsi/task.o 00:03:02.466 CC lib/vhost/vhost_rpc.o 00:03:02.466 SYMLINK libspdk_ftl.so 00:03:02.466 CC lib/vhost/vhost_scsi.o 00:03:02.466 CC lib/vhost/vhost_blk.o 00:03:02.466 CC lib/vhost/rte_vhost_user.o 00:03:03.400 LIB libspdk_iscsi.a 00:03:03.400 SO libspdk_iscsi.so.8.0 00:03:03.400 LIB libspdk_nvmf.a 00:03:03.400 LIB libspdk_vhost.a 00:03:03.400 SYMLINK libspdk_iscsi.so 00:03:03.400 SO libspdk_nvmf.so.20.0 00:03:03.400 SO libspdk_vhost.so.8.0 00:03:03.658 SYMLINK libspdk_vhost.so 00:03:03.658 SYMLINK libspdk_nvmf.so 00:03:03.916 CC module/env_dpdk/env_dpdk_rpc.o 00:03:03.916 CC module/accel/ioat/accel_ioat.o 00:03:03.917 CC module/keyring/linux/keyring.o 00:03:03.917 CC module/keyring/file/keyring.o 00:03:03.917 CC module/accel/error/accel_error.o 00:03:03.917 CC module/fsdev/aio/fsdev_aio.o 00:03:03.917 CC module/accel/dsa/accel_dsa.o 00:03:03.917 CC module/sock/posix/posix.o 00:03:03.917 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:03.917 CC module/blob/bdev/blob_bdev.o 00:03:03.917 LIB libspdk_env_dpdk_rpc.a 00:03:04.175 SO libspdk_env_dpdk_rpc.so.6.0 00:03:04.175 CC module/keyring/linux/keyring_rpc.o 00:03:04.175 SYMLINK libspdk_env_dpdk_rpc.so 00:03:04.175 CC module/keyring/file/keyring_rpc.o 00:03:04.175 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:04.175 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.175 LIB libspdk_scheduler_dynamic.a 00:03:04.175 SO libspdk_scheduler_dynamic.so.4.0 00:03:04.175 CC module/accel/error/accel_error_rpc.o 00:03:04.175 LIB libspdk_keyring_linux.a 00:03:04.175 CC module/accel/dsa/accel_dsa_rpc.o 00:03:04.175 SO libspdk_keyring_linux.so.1.0 00:03:04.175 LIB libspdk_keyring_file.a 00:03:04.175 LIB libspdk_accel_ioat.a 00:03:04.175 SYMLINK libspdk_scheduler_dynamic.so 00:03:04.175 CC module/fsdev/aio/linux_aio_mgr.o 00:03:04.175 SO libspdk_keyring_file.so.2.0 00:03:04.175 SO libspdk_accel_ioat.so.6.0 00:03:04.175 LIB libspdk_blob_bdev.a 00:03:04.175 SYMLINK libspdk_keyring_linux.so 
00:03:04.175 SO libspdk_blob_bdev.so.12.0 00:03:04.433 SYMLINK libspdk_keyring_file.so 00:03:04.433 SYMLINK libspdk_accel_ioat.so 00:03:04.433 LIB libspdk_accel_error.a 00:03:04.433 LIB libspdk_accel_dsa.a 00:03:04.433 SO libspdk_accel_error.so.2.0 00:03:04.433 SYMLINK libspdk_blob_bdev.so 00:03:04.433 SO libspdk_accel_dsa.so.5.0 00:03:04.433 SYMLINK libspdk_accel_error.so 00:03:04.433 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:04.433 SYMLINK libspdk_accel_dsa.so 00:03:04.433 CC module/accel/iaa/accel_iaa.o 00:03:04.433 CC module/scheduler/gscheduler/gscheduler.o 00:03:04.433 LIB libspdk_scheduler_dpdk_governor.a 00:03:04.433 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:04.691 CC module/bdev/gpt/gpt.o 00:03:04.691 CC module/bdev/error/vbdev_error.o 00:03:04.691 LIB libspdk_fsdev_aio.a 00:03:04.691 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.691 CC module/bdev/delay/vbdev_delay.o 00:03:04.691 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.691 LIB libspdk_scheduler_gscheduler.a 00:03:04.691 SO libspdk_fsdev_aio.so.1.0 00:03:04.691 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:04.691 CC module/accel/iaa/accel_iaa_rpc.o 00:03:04.691 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.691 SO libspdk_scheduler_gscheduler.so.4.0 00:03:04.691 SYMLINK libspdk_fsdev_aio.so 00:03:04.691 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.691 SYMLINK libspdk_scheduler_gscheduler.so 00:03:04.691 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.691 LIB libspdk_accel_iaa.a 00:03:04.691 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:04.691 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.691 LIB libspdk_sock_posix.a 00:03:04.691 SO libspdk_accel_iaa.so.3.0 00:03:04.691 SO libspdk_sock_posix.so.6.0 00:03:04.691 LIB libspdk_bdev_error.a 00:03:04.949 SYMLINK libspdk_accel_iaa.so 00:03:04.949 SO libspdk_bdev_error.so.6.0 00:03:04.949 SYMLINK libspdk_sock_posix.so 00:03:04.949 LIB libspdk_blobfs_bdev.a 00:03:04.949 SYMLINK libspdk_bdev_error.so 00:03:04.949 SO libspdk_blobfs_bdev.so.6.0 00:03:04.949 LIB libspdk_bdev_delay.a 00:03:04.949 CC module/bdev/malloc/bdev_malloc.o 00:03:04.949 LIB libspdk_bdev_gpt.a 00:03:04.949 SO libspdk_bdev_delay.so.6.0 00:03:04.949 SYMLINK libspdk_blobfs_bdev.so 00:03:04.949 SO libspdk_bdev_gpt.so.6.0 00:03:04.949 CC module/bdev/null/bdev_null.o 00:03:04.949 CC module/bdev/passthru/vbdev_passthru.o 00:03:04.949 SYMLINK libspdk_bdev_delay.so 00:03:04.949 CC module/bdev/nvme/bdev_nvme.o 00:03:04.949 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.949 SYMLINK libspdk_bdev_gpt.so 00:03:04.949 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.949 CC module/bdev/raid/bdev_raid.o 00:03:05.207 CC module/bdev/split/vbdev_split.o 00:03:05.207 CC module/bdev/nvme/nvme_rpc.o 00:03:05.207 CC module/bdev/nvme/bdev_mdns_client.o 00:03:05.207 LIB libspdk_bdev_lvol.a 00:03:05.207 SO libspdk_bdev_lvol.so.6.0 00:03:05.207 CC module/bdev/null/bdev_null_rpc.o 00:03:05.207 SYMLINK libspdk_bdev_lvol.so 00:03:05.207 CC module/bdev/nvme/vbdev_opal.o 00:03:05.207 LIB libspdk_bdev_passthru.a 00:03:05.207 CC module/bdev/split/vbdev_split_rpc.o 00:03:05.207 SO libspdk_bdev_passthru.so.6.0 00:03:05.207 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:05.207 SYMLINK libspdk_bdev_passthru.so 00:03:05.465 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:05.465 LIB libspdk_bdev_split.a 00:03:05.465 LIB libspdk_bdev_null.a 00:03:05.465 SO libspdk_bdev_split.so.6.0 00:03:05.465 SO libspdk_bdev_null.so.6.0 00:03:05.465 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:05.465 LIB libspdk_bdev_malloc.a 00:03:05.465 SYMLINK 
libspdk_bdev_null.so 00:03:05.465 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:05.465 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:05.465 SYMLINK libspdk_bdev_split.so 00:03:05.465 SO libspdk_bdev_malloc.so.6.0 00:03:05.465 CC module/bdev/xnvme/bdev_xnvme.o 00:03:05.465 SYMLINK libspdk_bdev_malloc.so 00:03:05.465 CC module/bdev/raid/bdev_raid_rpc.o 00:03:05.465 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:05.723 CC module/bdev/aio/bdev_aio.o 00:03:05.723 CC module/bdev/ftl/bdev_ftl.o 00:03:05.723 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:05.723 LIB libspdk_bdev_zone_block.a 00:03:05.723 CC module/bdev/raid/bdev_raid_sb.o 00:03:05.723 SO libspdk_bdev_zone_block.so.6.0 00:03:05.723 CC module/bdev/aio/bdev_aio_rpc.o 00:03:05.723 CC module/bdev/iscsi/bdev_iscsi.o 00:03:05.723 LIB libspdk_bdev_xnvme.a 00:03:05.723 SO libspdk_bdev_xnvme.so.3.0 00:03:05.723 SYMLINK libspdk_bdev_zone_block.so 00:03:05.723 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:05.723 SYMLINK libspdk_bdev_xnvme.so 00:03:05.980 CC module/bdev/raid/raid0.o 00:03:05.980 LIB libspdk_bdev_aio.a 00:03:05.980 CC module/bdev/raid/raid1.o 00:03:05.980 SO libspdk_bdev_aio.so.6.0 00:03:05.980 LIB libspdk_bdev_ftl.a 00:03:05.980 CC module/bdev/raid/concat.o 00:03:05.980 SYMLINK libspdk_bdev_aio.so 00:03:05.981 SO libspdk_bdev_ftl.so.6.0 00:03:05.981 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:05.981 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:05.981 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:05.981 SYMLINK libspdk_bdev_ftl.so 00:03:05.981 LIB libspdk_bdev_iscsi.a 00:03:06.265 SO libspdk_bdev_iscsi.so.6.0 00:03:06.265 LIB libspdk_bdev_raid.a 00:03:06.265 SYMLINK libspdk_bdev_iscsi.so 00:03:06.265 SO libspdk_bdev_raid.so.6.0 00:03:06.265 SYMLINK libspdk_bdev_raid.so 00:03:06.523 LIB libspdk_bdev_virtio.a 00:03:06.523 SO libspdk_bdev_virtio.so.6.0 00:03:06.523 SYMLINK libspdk_bdev_virtio.so 00:03:07.457 LIB libspdk_bdev_nvme.a 00:03:07.457 SO libspdk_bdev_nvme.so.7.1 00:03:07.715 SYMLINK libspdk_bdev_nvme.so 00:03:07.973 CC module/event/subsystems/sock/sock.o 00:03:07.973 CC module/event/subsystems/vmd/vmd.o 00:03:07.973 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:07.973 CC module/event/subsystems/scheduler/scheduler.o 00:03:07.973 CC module/event/subsystems/iobuf/iobuf.o 00:03:07.973 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:07.973 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:07.973 CC module/event/subsystems/keyring/keyring.o 00:03:07.974 CC module/event/subsystems/fsdev/fsdev.o 00:03:08.232 LIB libspdk_event_vhost_blk.a 00:03:08.232 LIB libspdk_event_iobuf.a 00:03:08.232 LIB libspdk_event_keyring.a 00:03:08.232 LIB libspdk_event_sock.a 00:03:08.232 LIB libspdk_event_scheduler.a 00:03:08.232 SO libspdk_event_vhost_blk.so.3.0 00:03:08.232 LIB libspdk_event_fsdev.a 00:03:08.232 LIB libspdk_event_vmd.a 00:03:08.232 SO libspdk_event_keyring.so.1.0 00:03:08.232 SO libspdk_event_iobuf.so.3.0 00:03:08.232 SO libspdk_event_sock.so.5.0 00:03:08.232 SO libspdk_event_scheduler.so.4.0 00:03:08.232 SO libspdk_event_fsdev.so.1.0 00:03:08.232 SO libspdk_event_vmd.so.6.0 00:03:08.232 SYMLINK libspdk_event_vhost_blk.so 00:03:08.232 SYMLINK libspdk_event_keyring.so 00:03:08.232 SYMLINK libspdk_event_sock.so 00:03:08.232 SYMLINK libspdk_event_iobuf.so 00:03:08.232 SYMLINK libspdk_event_fsdev.so 00:03:08.232 SYMLINK libspdk_event_scheduler.so 00:03:08.232 SYMLINK libspdk_event_vmd.so 00:03:08.491 CC module/event/subsystems/accel/accel.o 00:03:08.491 LIB libspdk_event_accel.a 00:03:08.749 SO libspdk_event_accel.so.6.0 
00:03:08.749 SYMLINK libspdk_event_accel.so 00:03:09.007 CC module/event/subsystems/bdev/bdev.o 00:03:09.007 LIB libspdk_event_bdev.a 00:03:09.007 SO libspdk_event_bdev.so.6.0 00:03:09.265 SYMLINK libspdk_event_bdev.so 00:03:09.265 CC module/event/subsystems/ublk/ublk.o 00:03:09.265 CC module/event/subsystems/nbd/nbd.o 00:03:09.265 CC module/event/subsystems/scsi/scsi.o 00:03:09.266 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:09.266 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:09.524 LIB libspdk_event_nbd.a 00:03:09.524 LIB libspdk_event_ublk.a 00:03:09.524 SO libspdk_event_nbd.so.6.0 00:03:09.524 SO libspdk_event_ublk.so.3.0 00:03:09.524 LIB libspdk_event_scsi.a 00:03:09.524 SO libspdk_event_scsi.so.6.0 00:03:09.524 SYMLINK libspdk_event_ublk.so 00:03:09.524 SYMLINK libspdk_event_nbd.so 00:03:09.524 LIB libspdk_event_nvmf.a 00:03:09.524 SYMLINK libspdk_event_scsi.so 00:03:09.524 SO libspdk_event_nvmf.so.6.0 00:03:09.524 SYMLINK libspdk_event_nvmf.so 00:03:09.783 CC module/event/subsystems/iscsi/iscsi.o 00:03:09.783 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:09.783 LIB libspdk_event_vhost_scsi.a 00:03:09.783 LIB libspdk_event_iscsi.a 00:03:09.783 SO libspdk_event_vhost_scsi.so.3.0 00:03:09.783 SO libspdk_event_iscsi.so.6.0 00:03:10.042 SYMLINK libspdk_event_vhost_scsi.so 00:03:10.042 SYMLINK libspdk_event_iscsi.so 00:03:10.042 SO libspdk.so.6.0 00:03:10.042 SYMLINK libspdk.so 00:03:10.301 CXX app/trace/trace.o 00:03:10.301 CC app/spdk_lspci/spdk_lspci.o 00:03:10.301 CC app/trace_record/trace_record.o 00:03:10.301 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:10.301 CC app/iscsi_tgt/iscsi_tgt.o 00:03:10.301 CC app/nvmf_tgt/nvmf_main.o 00:03:10.301 CC examples/util/zipf/zipf.o 00:03:10.301 CC examples/ioat/perf/perf.o 00:03:10.301 CC app/spdk_tgt/spdk_tgt.o 00:03:10.301 CC test/thread/poller_perf/poller_perf.o 00:03:10.301 LINK spdk_lspci 00:03:10.301 LINK interrupt_tgt 00:03:10.301 LINK poller_perf 00:03:10.301 LINK spdk_trace_record 00:03:10.301 LINK nvmf_tgt 00:03:10.301 LINK zipf 00:03:10.559 LINK iscsi_tgt 00:03:10.559 LINK ioat_perf 00:03:10.559 LINK spdk_trace 00:03:10.559 LINK spdk_tgt 00:03:10.559 CC app/spdk_nvme_perf/perf.o 00:03:10.559 CC app/spdk_nvme_discover/discovery_aer.o 00:03:10.559 CC app/spdk_nvme_identify/identify.o 00:03:10.559 CC examples/ioat/verify/verify.o 00:03:10.817 CC examples/sock/hello_world/hello_sock.o 00:03:10.817 TEST_HEADER include/spdk/accel.h 00:03:10.817 TEST_HEADER include/spdk/accel_module.h 00:03:10.817 TEST_HEADER include/spdk/assert.h 00:03:10.817 TEST_HEADER include/spdk/barrier.h 00:03:10.817 CC test/dma/test_dma/test_dma.o 00:03:10.817 TEST_HEADER include/spdk/base64.h 00:03:10.817 TEST_HEADER include/spdk/bdev.h 00:03:10.817 TEST_HEADER include/spdk/bdev_module.h 00:03:10.817 TEST_HEADER include/spdk/bdev_zone.h 00:03:10.817 TEST_HEADER include/spdk/bit_array.h 00:03:10.817 TEST_HEADER include/spdk/bit_pool.h 00:03:10.817 TEST_HEADER include/spdk/blob_bdev.h 00:03:10.817 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:10.817 TEST_HEADER include/spdk/blobfs.h 00:03:10.817 TEST_HEADER include/spdk/blob.h 00:03:10.817 TEST_HEADER include/spdk/conf.h 00:03:10.817 TEST_HEADER include/spdk/config.h 00:03:10.817 CC examples/thread/thread/thread_ex.o 00:03:10.817 TEST_HEADER include/spdk/cpuset.h 00:03:10.817 TEST_HEADER include/spdk/crc16.h 00:03:10.817 TEST_HEADER include/spdk/crc32.h 00:03:10.817 TEST_HEADER include/spdk/crc64.h 00:03:10.817 TEST_HEADER include/spdk/dif.h 00:03:10.817 CC examples/vmd/lsvmd/lsvmd.o 
00:03:10.817 TEST_HEADER include/spdk/dma.h 00:03:10.817 TEST_HEADER include/spdk/endian.h 00:03:10.817 TEST_HEADER include/spdk/env_dpdk.h 00:03:10.817 LINK spdk_nvme_discover 00:03:10.817 TEST_HEADER include/spdk/env.h 00:03:10.817 TEST_HEADER include/spdk/event.h 00:03:10.817 TEST_HEADER include/spdk/fd_group.h 00:03:10.817 TEST_HEADER include/spdk/fd.h 00:03:10.817 TEST_HEADER include/spdk/file.h 00:03:10.817 TEST_HEADER include/spdk/fsdev.h 00:03:10.817 TEST_HEADER include/spdk/fsdev_module.h 00:03:10.817 TEST_HEADER include/spdk/ftl.h 00:03:10.817 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.817 TEST_HEADER include/spdk/hexlify.h 00:03:10.817 TEST_HEADER include/spdk/histogram_data.h 00:03:10.817 TEST_HEADER include/spdk/idxd.h 00:03:10.817 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.817 TEST_HEADER include/spdk/init.h 00:03:10.817 TEST_HEADER include/spdk/ioat.h 00:03:10.818 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.818 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.818 TEST_HEADER include/spdk/json.h 00:03:10.818 TEST_HEADER include/spdk/jsonrpc.h 00:03:10.818 TEST_HEADER include/spdk/keyring.h 00:03:10.818 TEST_HEADER include/spdk/keyring_module.h 00:03:10.818 TEST_HEADER include/spdk/likely.h 00:03:10.818 TEST_HEADER include/spdk/log.h 00:03:10.818 TEST_HEADER include/spdk/lvol.h 00:03:10.818 TEST_HEADER include/spdk/md5.h 00:03:10.818 TEST_HEADER include/spdk/memory.h 00:03:10.818 TEST_HEADER include/spdk/mmio.h 00:03:10.818 TEST_HEADER include/spdk/nbd.h 00:03:10.818 TEST_HEADER include/spdk/net.h 00:03:10.818 TEST_HEADER include/spdk/notify.h 00:03:10.818 TEST_HEADER include/spdk/nvme.h 00:03:10.818 CC test/app/bdev_svc/bdev_svc.o 00:03:10.818 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.818 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.818 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:10.818 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.818 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.818 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.818 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.818 TEST_HEADER include/spdk/nvmf.h 00:03:10.818 LINK verify 00:03:10.818 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.818 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.818 TEST_HEADER include/spdk/opal.h 00:03:10.818 TEST_HEADER include/spdk/opal_spec.h 00:03:10.818 TEST_HEADER include/spdk/pci_ids.h 00:03:10.818 TEST_HEADER include/spdk/pipe.h 00:03:10.818 TEST_HEADER include/spdk/queue.h 00:03:10.818 TEST_HEADER include/spdk/reduce.h 00:03:10.818 TEST_HEADER include/spdk/rpc.h 00:03:10.818 TEST_HEADER include/spdk/scheduler.h 00:03:10.818 TEST_HEADER include/spdk/scsi.h 00:03:10.818 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.818 LINK lsvmd 00:03:10.818 TEST_HEADER include/spdk/sock.h 00:03:10.818 TEST_HEADER include/spdk/stdinc.h 00:03:10.818 TEST_HEADER include/spdk/string.h 00:03:10.818 TEST_HEADER include/spdk/thread.h 00:03:10.818 TEST_HEADER include/spdk/trace.h 00:03:10.818 TEST_HEADER include/spdk/trace_parser.h 00:03:10.818 TEST_HEADER include/spdk/tree.h 00:03:10.818 TEST_HEADER include/spdk/ublk.h 00:03:10.818 TEST_HEADER include/spdk/util.h 00:03:10.818 TEST_HEADER include/spdk/uuid.h 00:03:10.818 TEST_HEADER include/spdk/version.h 00:03:10.818 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.818 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.818 TEST_HEADER include/spdk/vhost.h 00:03:10.818 TEST_HEADER include/spdk/vmd.h 00:03:10.818 TEST_HEADER include/spdk/xor.h 00:03:10.818 TEST_HEADER include/spdk/zipf.h 00:03:10.818 CXX test/cpp_headers/accel.o 
00:03:10.818 CXX test/cpp_headers/accel_module.o 00:03:10.818 LINK hello_sock 00:03:10.818 CXX test/cpp_headers/assert.o 00:03:11.077 LINK bdev_svc 00:03:11.077 LINK thread 00:03:11.077 CC examples/vmd/led/led.o 00:03:11.077 CXX test/cpp_headers/barrier.o 00:03:11.077 CC app/spdk_top/spdk_top.o 00:03:11.077 LINK test_dma 00:03:11.077 CXX test/cpp_headers/base64.o 00:03:11.077 LINK led 00:03:11.077 CC test/event/event_perf/event_perf.o 00:03:11.077 CC test/env/mem_callbacks/mem_callbacks.o 00:03:11.077 CC test/app/histogram_perf/histogram_perf.o 00:03:11.337 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:11.337 LINK spdk_nvme_perf 00:03:11.337 LINK event_perf 00:03:11.337 CXX test/cpp_headers/bdev.o 00:03:11.337 LINK histogram_perf 00:03:11.337 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:11.337 LINK spdk_nvme_identify 00:03:11.337 CC examples/idxd/perf/perf.o 00:03:11.337 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.337 CC test/event/reactor/reactor.o 00:03:11.337 CXX test/cpp_headers/bdev_module.o 00:03:11.595 CC test/rpc_client/rpc_client_test.o 00:03:11.595 LINK nvme_fuzz 00:03:11.595 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:11.595 LINK reactor 00:03:11.595 LINK rpc_client_test 00:03:11.595 CXX test/cpp_headers/bdev_zone.o 00:03:11.595 LINK mem_callbacks 00:03:11.853 LINK idxd_perf 00:03:11.853 CC test/event/reactor_perf/reactor_perf.o 00:03:11.853 CXX test/cpp_headers/bit_array.o 00:03:11.853 LINK spdk_top 00:03:11.853 CC test/accel/dif/dif.o 00:03:11.853 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:11.853 CC test/env/vtophys/vtophys.o 00:03:11.853 CC examples/accel/perf/accel_perf.o 00:03:11.853 LINK reactor_perf 00:03:11.853 LINK vtophys 00:03:11.853 CXX test/cpp_headers/bit_pool.o 00:03:11.853 LINK vhost_fuzz 00:03:12.110 CC app/vhost/vhost.o 00:03:12.110 LINK hello_fsdev 00:03:12.110 CC test/blobfs/mkfs/mkfs.o 00:03:12.110 CXX test/cpp_headers/blob_bdev.o 00:03:12.110 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:12.110 CC test/event/app_repeat/app_repeat.o 00:03:12.110 LINK vhost 00:03:12.110 LINK mkfs 00:03:12.110 CXX test/cpp_headers/blobfs_bdev.o 00:03:12.368 CC examples/blob/hello_world/hello_blob.o 00:03:12.368 LINK app_repeat 00:03:12.368 CC examples/blob/cli/blobcli.o 00:03:12.368 LINK env_dpdk_post_init 00:03:12.368 LINK accel_perf 00:03:12.368 LINK dif 00:03:12.368 CXX test/cpp_headers/blobfs.o 00:03:12.368 CC app/spdk_dd/spdk_dd.o 00:03:12.368 LINK hello_blob 00:03:12.368 CXX test/cpp_headers/blob.o 00:03:12.368 CC test/env/memory/memory_ut.o 00:03:12.628 CXX test/cpp_headers/conf.o 00:03:12.628 CC test/event/scheduler/scheduler.o 00:03:12.628 CC test/env/pci/pci_ut.o 00:03:12.628 CC test/lvol/esnap/esnap.o 00:03:12.628 CXX test/cpp_headers/config.o 00:03:12.628 LINK iscsi_fuzz 00:03:12.628 CXX test/cpp_headers/cpuset.o 00:03:12.628 LINK blobcli 00:03:12.628 CC test/nvme/aer/aer.o 00:03:12.628 CC test/nvme/reset/reset.o 00:03:12.628 LINK spdk_dd 00:03:12.628 LINK scheduler 00:03:12.886 CXX test/cpp_headers/crc16.o 00:03:12.886 CC test/app/jsoncat/jsoncat.o 00:03:12.886 LINK reset 00:03:12.886 LINK pci_ut 00:03:12.886 CXX test/cpp_headers/crc32.o 00:03:12.886 CC examples/nvme/hello_world/hello_world.o 00:03:12.886 LINK aer 00:03:12.886 LINK jsoncat 00:03:12.886 CC app/fio/nvme/fio_plugin.o 00:03:13.145 CXX test/cpp_headers/crc64.o 00:03:13.145 CC examples/bdev/hello_world/hello_bdev.o 00:03:13.145 CC app/fio/bdev/fio_plugin.o 00:03:13.145 CC test/app/stub/stub.o 00:03:13.145 CC test/nvme/sgl/sgl.o 00:03:13.145 CXX test/cpp_headers/dif.o 
00:03:13.145 CC test/nvme/e2edp/nvme_dp.o 00:03:13.145 LINK hello_world 00:03:13.145 CXX test/cpp_headers/dma.o 00:03:13.145 LINK hello_bdev 00:03:13.145 LINK stub 00:03:13.402 LINK memory_ut 00:03:13.402 CXX test/cpp_headers/endian.o 00:03:13.402 CXX test/cpp_headers/env_dpdk.o 00:03:13.402 CC examples/nvme/reconnect/reconnect.o 00:03:13.402 LINK sgl 00:03:13.402 LINK nvme_dp 00:03:13.402 LINK spdk_nvme 00:03:13.402 CC examples/bdev/bdevperf/bdevperf.o 00:03:13.402 CXX test/cpp_headers/env.o 00:03:13.659 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:13.659 CC examples/nvme/arbitration/arbitration.o 00:03:13.659 LINK spdk_bdev 00:03:13.659 CC test/nvme/overhead/overhead.o 00:03:13.659 CC test/bdev/bdevio/bdevio.o 00:03:13.659 CC examples/nvme/hotplug/hotplug.o 00:03:13.660 CXX test/cpp_headers/event.o 00:03:13.660 CXX test/cpp_headers/fd_group.o 00:03:13.660 LINK reconnect 00:03:13.660 CXX test/cpp_headers/fd.o 00:03:13.917 LINK hotplug 00:03:13.917 LINK arbitration 00:03:13.917 LINK overhead 00:03:13.917 CC test/nvme/err_injection/err_injection.o 00:03:13.917 CXX test/cpp_headers/file.o 00:03:13.917 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:13.917 CC examples/nvme/abort/abort.o 00:03:13.917 CC test/nvme/startup/startup.o 00:03:13.917 LINK err_injection 00:03:13.917 LINK bdevio 00:03:13.917 CXX test/cpp_headers/fsdev.o 00:03:13.917 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:13.917 LINK nvme_manage 00:03:14.175 LINK cmb_copy 00:03:14.175 CXX test/cpp_headers/fsdev_module.o 00:03:14.175 LINK bdevperf 00:03:14.175 CXX test/cpp_headers/ftl.o 00:03:14.175 LINK startup 00:03:14.175 LINK pmr_persistence 00:03:14.175 CC test/nvme/reserve/reserve.o 00:03:14.175 CXX test/cpp_headers/gpt_spec.o 00:03:14.175 CXX test/cpp_headers/hexlify.o 00:03:14.175 LINK abort 00:03:14.175 CXX test/cpp_headers/histogram_data.o 00:03:14.175 CXX test/cpp_headers/idxd.o 00:03:14.175 CC test/nvme/simple_copy/simple_copy.o 00:03:14.175 CC test/nvme/connect_stress/connect_stress.o 00:03:14.433 CXX test/cpp_headers/idxd_spec.o 00:03:14.433 CC test/nvme/boot_partition/boot_partition.o 00:03:14.433 CC test/nvme/compliance/nvme_compliance.o 00:03:14.433 LINK reserve 00:03:14.433 CXX test/cpp_headers/init.o 00:03:14.433 CXX test/cpp_headers/ioat.o 00:03:14.433 CXX test/cpp_headers/ioat_spec.o 00:03:14.433 LINK simple_copy 00:03:14.433 LINK connect_stress 00:03:14.433 LINK boot_partition 00:03:14.433 CXX test/cpp_headers/iscsi_spec.o 00:03:14.433 CC examples/nvmf/nvmf/nvmf.o 00:03:14.433 CXX test/cpp_headers/json.o 00:03:14.433 CXX test/cpp_headers/jsonrpc.o 00:03:14.433 CC test/nvme/fused_ordering/fused_ordering.o 00:03:14.691 CXX test/cpp_headers/keyring.o 00:03:14.691 LINK nvme_compliance 00:03:14.691 CXX test/cpp_headers/keyring_module.o 00:03:14.691 CXX test/cpp_headers/likely.o 00:03:14.691 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:14.691 CC test/nvme/cuse/cuse.o 00:03:14.691 CC test/nvme/fdp/fdp.o 00:03:14.691 LINK fused_ordering 00:03:14.691 CXX test/cpp_headers/log.o 00:03:14.691 CXX test/cpp_headers/lvol.o 00:03:14.691 CXX test/cpp_headers/md5.o 00:03:14.691 LINK nvmf 00:03:14.691 CXX test/cpp_headers/memory.o 00:03:14.691 CXX test/cpp_headers/mmio.o 00:03:14.950 LINK doorbell_aers 00:03:14.950 CXX test/cpp_headers/nbd.o 00:03:14.950 CXX test/cpp_headers/net.o 00:03:14.950 CXX test/cpp_headers/notify.o 00:03:14.950 CXX test/cpp_headers/nvme.o 00:03:14.950 CXX test/cpp_headers/nvme_intel.o 00:03:14.950 CXX test/cpp_headers/nvme_ocssd.o 00:03:14.950 CXX test/cpp_headers/nvme_ocssd_spec.o 
00:03:14.950 CXX test/cpp_headers/nvme_spec.o 00:03:14.950 CXX test/cpp_headers/nvme_zns.o 00:03:14.950 CXX test/cpp_headers/nvmf_cmd.o 00:03:14.950 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:14.950 CXX test/cpp_headers/nvmf.o 00:03:14.950 LINK fdp 00:03:14.950 CXX test/cpp_headers/nvmf_spec.o 00:03:14.950 CXX test/cpp_headers/nvmf_transport.o 00:03:15.208 CXX test/cpp_headers/opal.o 00:03:15.208 CXX test/cpp_headers/opal_spec.o 00:03:15.208 CXX test/cpp_headers/pci_ids.o 00:03:15.208 CXX test/cpp_headers/pipe.o 00:03:15.208 CXX test/cpp_headers/queue.o 00:03:15.208 CXX test/cpp_headers/reduce.o 00:03:15.208 CXX test/cpp_headers/rpc.o 00:03:15.208 CXX test/cpp_headers/scheduler.o 00:03:15.208 CXX test/cpp_headers/scsi.o 00:03:15.208 CXX test/cpp_headers/scsi_spec.o 00:03:15.208 CXX test/cpp_headers/sock.o 00:03:15.208 CXX test/cpp_headers/stdinc.o 00:03:15.208 CXX test/cpp_headers/string.o 00:03:15.208 CXX test/cpp_headers/thread.o 00:03:15.208 CXX test/cpp_headers/trace.o 00:03:15.208 CXX test/cpp_headers/trace_parser.o 00:03:15.208 CXX test/cpp_headers/tree.o 00:03:15.466 CXX test/cpp_headers/ublk.o 00:03:15.466 CXX test/cpp_headers/util.o 00:03:15.466 CXX test/cpp_headers/uuid.o 00:03:15.466 CXX test/cpp_headers/version.o 00:03:15.466 CXX test/cpp_headers/vfio_user_pci.o 00:03:15.466 CXX test/cpp_headers/vfio_user_spec.o 00:03:15.466 CXX test/cpp_headers/vhost.o 00:03:15.466 CXX test/cpp_headers/vmd.o 00:03:15.466 CXX test/cpp_headers/xor.o 00:03:15.466 CXX test/cpp_headers/zipf.o 00:03:15.725 LINK cuse 00:03:16.659 LINK esnap 00:03:16.917 00:03:16.917 real 1m5.003s 00:03:16.917 user 6m4.690s 00:03:16.917 sys 1m6.273s 00:03:16.917 ************************************ 00:03:16.917 END TEST make 00:03:16.917 ************************************ 00:03:16.917 22:43:56 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:16.917 22:43:56 make -- common/autotest_common.sh@10 -- $ set +x 00:03:16.917 22:43:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:16.917 22:43:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:16.917 22:43:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:16.917 22:43:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.917 22:43:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:17.175 22:43:56 -- pm/common@44 -- $ pid=5065 00:03:17.175 22:43:56 -- pm/common@50 -- $ kill -TERM 5065 00:03:17.175 22:43:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.175 22:43:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:17.175 22:43:56 -- pm/common@44 -- $ pid=5066 00:03:17.175 22:43:56 -- pm/common@50 -- $ kill -TERM 5066 00:03:17.175 22:43:56 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:17.175 22:43:56 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:17.175 22:43:56 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:17.175 22:43:56 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:17.175 22:43:56 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:17.175 22:43:56 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:17.175 22:43:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:17.175 22:43:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:17.175 22:43:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 
00:03:17.175 22:43:56 -- scripts/common.sh@336 -- # IFS=.-: 00:03:17.175 22:43:56 -- scripts/common.sh@336 -- # read -ra ver1 00:03:17.175 22:43:56 -- scripts/common.sh@337 -- # IFS=.-: 00:03:17.175 22:43:56 -- scripts/common.sh@337 -- # read -ra ver2 00:03:17.175 22:43:56 -- scripts/common.sh@338 -- # local 'op=<' 00:03:17.175 22:43:56 -- scripts/common.sh@340 -- # ver1_l=2 00:03:17.175 22:43:56 -- scripts/common.sh@341 -- # ver2_l=1 00:03:17.175 22:43:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:17.175 22:43:56 -- scripts/common.sh@344 -- # case "$op" in 00:03:17.175 22:43:56 -- scripts/common.sh@345 -- # : 1 00:03:17.175 22:43:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:17.175 22:43:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:17.175 22:43:56 -- scripts/common.sh@365 -- # decimal 1 00:03:17.175 22:43:56 -- scripts/common.sh@353 -- # local d=1 00:03:17.175 22:43:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:17.175 22:43:56 -- scripts/common.sh@355 -- # echo 1 00:03:17.175 22:43:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:17.175 22:43:56 -- scripts/common.sh@366 -- # decimal 2 00:03:17.175 22:43:56 -- scripts/common.sh@353 -- # local d=2 00:03:17.175 22:43:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:17.175 22:43:56 -- scripts/common.sh@355 -- # echo 2 00:03:17.175 22:43:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:17.175 22:43:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:17.175 22:43:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:17.175 22:43:56 -- scripts/common.sh@368 -- # return 0 00:03:17.175 22:43:56 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:17.175 22:43:56 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.175 --rc genhtml_branch_coverage=1 00:03:17.176 --rc genhtml_function_coverage=1 00:03:17.176 --rc genhtml_legend=1 00:03:17.176 --rc geninfo_all_blocks=1 00:03:17.176 --rc geninfo_unexecuted_blocks=1 00:03:17.176 00:03:17.176 ' 00:03:17.176 22:43:56 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:17.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.176 --rc genhtml_branch_coverage=1 00:03:17.176 --rc genhtml_function_coverage=1 00:03:17.176 --rc genhtml_legend=1 00:03:17.176 --rc geninfo_all_blocks=1 00:03:17.176 --rc geninfo_unexecuted_blocks=1 00:03:17.176 00:03:17.176 ' 00:03:17.176 22:43:56 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:17.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.176 --rc genhtml_branch_coverage=1 00:03:17.176 --rc genhtml_function_coverage=1 00:03:17.176 --rc genhtml_legend=1 00:03:17.176 --rc geninfo_all_blocks=1 00:03:17.176 --rc geninfo_unexecuted_blocks=1 00:03:17.176 00:03:17.176 ' 00:03:17.176 22:43:56 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:17.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.176 --rc genhtml_branch_coverage=1 00:03:17.176 --rc genhtml_function_coverage=1 00:03:17.176 --rc genhtml_legend=1 00:03:17.176 --rc geninfo_all_blocks=1 00:03:17.176 --rc geninfo_unexecuted_blocks=1 00:03:17.176 00:03:17.176 ' 00:03:17.176 22:43:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:17.176 22:43:56 -- nvmf/common.sh@7 -- # uname -s 00:03:17.176 22:43:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
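The xtrace above walks scripts/common.sh's lt/cmp_versions helpers to conclude that the installed lcov (1.15) predates 2.x, which is why LCOV_OPTS just above carries the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage option names. A condensed sketch of that field-by-field comparison, simplified from the traced logic rather than copied from the script:

    # split each version string on '.', '-' or ':' and compare numerically, field by field
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater, so not "less than"
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    lt 1.15 2 && echo "lcov predates 2.x: use the legacy --rc option names"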
00:03:17.176 22:43:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:17.176 22:43:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:17.176 22:43:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:17.176 22:43:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:17.176 22:43:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:17.176 22:43:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:17.176 22:43:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:17.176 22:43:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:17.176 22:43:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:17.176 22:43:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ffb3371-6748-4348-a0e1-15b4033c0cdb 00:03:17.176 22:43:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=1ffb3371-6748-4348-a0e1-15b4033c0cdb 00:03:17.176 22:43:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:17.176 22:43:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:17.176 22:43:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:17.176 22:43:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:17.176 22:43:56 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:17.176 22:43:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:17.176 22:43:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:17.176 22:43:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:17.176 22:43:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:17.176 22:43:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.176 22:43:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.176 22:43:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.176 22:43:56 -- paths/export.sh@5 -- # export PATH 00:03:17.176 22:43:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.176 22:43:56 -- nvmf/common.sh@51 -- # : 0 00:03:17.176 22:43:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:17.176 22:43:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:17.176 22:43:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:17.176 22:43:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:17.176 22:43:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:17.176 22:43:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:17.176 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer 
expression expected 00:03:17.176 22:43:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:17.176 22:43:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:17.176 22:43:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:17.176 22:43:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:17.176 22:43:56 -- spdk/autotest.sh@32 -- # uname -s 00:03:17.176 22:43:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:17.176 22:43:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:17.176 22:43:56 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:17.176 22:43:56 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:17.176 22:43:56 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:17.176 22:43:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:17.176 22:43:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:17.176 22:43:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:17.176 22:43:56 -- spdk/autotest.sh@48 -- # udevadm_pid=56001 00:03:17.176 22:43:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:17.176 22:43:56 -- pm/common@17 -- # local monitor 00:03:17.176 22:43:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.176 22:43:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.176 22:43:56 -- pm/common@25 -- # sleep 1 00:03:17.176 22:43:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:17.176 22:43:56 -- pm/common@21 -- # date +%s 00:03:17.176 22:43:56 -- pm/common@21 -- # date +%s 00:03:17.176 22:43:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734129836 00:03:17.176 22:43:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734129836 00:03:17.176 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734129836_collect-cpu-load.pm.log 00:03:17.176 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734129836_collect-vmstat.pm.log 00:03:18.551 22:43:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:18.551 22:43:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:18.551 22:43:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:18.551 22:43:57 -- common/autotest_common.sh@10 -- # set +x 00:03:18.551 22:43:57 -- spdk/autotest.sh@59 -- # create_test_list 00:03:18.551 22:43:57 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:18.551 22:43:57 -- common/autotest_common.sh@10 -- # set +x 00:03:18.551 22:43:57 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:18.551 22:43:57 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:18.551 22:43:57 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:18.551 22:43:57 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:18.551 22:43:57 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:18.551 22:43:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:18.551 22:43:57 -- common/autotest_common.sh@1457 -- # uname 00:03:18.551 22:43:57 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:18.551 22:43:57 -- 
spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:18.551 22:43:57 -- common/autotest_common.sh@1477 -- # uname 00:03:18.551 22:43:57 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:18.551 22:43:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:18.551 22:43:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:18.551 lcov: LCOV version 1.15 00:03:18.551 22:43:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:33.461 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:33.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:48.350 22:44:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:48.350 22:44:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.350 22:44:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.350 22:44:25 -- spdk/autotest.sh@78 -- # rm -f 00:03:48.350 22:44:25 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.350 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:48.350 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:48.350 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:48.350 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:48.350 22:44:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:48.351 22:44:26 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:48.351 22:44:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:48.351 22:44:26 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:48.351 22:44:26 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:48.351 22:44:26 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:48.351 22:44:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:48.351 22:44:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:48.351 22:44:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:48.351 22:44:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:48.351 22:44:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:48.351 22:44:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:48.351 22:44:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:48.351 22:44:26 -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:03:48.351 22:44:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:48.351 22:44:26 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:48.351 22:44:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:03:48.351 22:44:26 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:03:48.351 22:44:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:03:48.351 22:44:26 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:03:48.351 22:44:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:03:48.351 22:44:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:48.351 22:44:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:03:48.351 22:44:26 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:48.351 22:44:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:48.351 22:44:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:48.351 22:44:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:48.351 22:44:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.351 22:44:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.351 22:44:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:48.351 22:44:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:48.351 22:44:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:48.351 No valid GPT data, bailing 00:03:48.351 22:44:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:48.351 22:44:26 -- scripts/common.sh@394 -- # pt= 00:03:48.351 22:44:26 -- scripts/common.sh@395 -- # return 1 00:03:48.351 22:44:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:48.351 1+0 records in 00:03:48.351 1+0 records out 00:03:48.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295932 s, 35.4 MB/s 00:03:48.351 22:44:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.351 22:44:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.351 22:44:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:48.351 22:44:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:48.351 22:44:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:48.351 No valid GPT data, bailing 
00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # pt= 00:03:48.351 22:44:27 -- scripts/common.sh@395 -- # return 1 00:03:48.351 22:44:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:48.351 1+0 records in 00:03:48.351 1+0 records out 00:03:48.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00576136 s, 182 MB/s 00:03:48.351 22:44:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.351 22:44:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.351 22:44:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:48.351 22:44:27 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:48.351 22:44:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:48.351 No valid GPT data, bailing 00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # pt= 00:03:48.351 22:44:27 -- scripts/common.sh@395 -- # return 1 00:03:48.351 22:44:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:48.351 1+0 records in 00:03:48.351 1+0 records out 00:03:48.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043304 s, 242 MB/s 00:03:48.351 22:44:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.351 22:44:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.351 22:44:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:48.351 22:44:27 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:48.351 22:44:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:48.351 No valid GPT data, bailing 00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # pt= 00:03:48.351 22:44:27 -- scripts/common.sh@395 -- # return 1 00:03:48.351 22:44:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:48.351 1+0 records in 00:03:48.351 1+0 records out 00:03:48.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395384 s, 265 MB/s 00:03:48.351 22:44:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.351 22:44:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.351 22:44:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:48.351 22:44:27 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:48.351 22:44:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:48.351 No valid GPT data, bailing 00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # pt= 00:03:48.351 22:44:27 -- scripts/common.sh@395 -- # return 1 00:03:48.351 22:44:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:48.351 1+0 records in 00:03:48.351 1+0 records out 00:03:48.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536674 s, 195 MB/s 00:03:48.351 22:44:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.351 22:44:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.351 22:44:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:48.351 22:44:27 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:48.351 22:44:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:48.351 No valid GPT data, bailing 
00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:48.351 22:44:27 -- scripts/common.sh@394 -- # pt= 00:03:48.351 22:44:27 -- scripts/common.sh@395 -- # return 1 00:03:48.351 22:44:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:48.351 1+0 records in 00:03:48.351 1+0 records out 00:03:48.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00383339 s, 274 MB/s 00:03:48.351 22:44:27 -- spdk/autotest.sh@105 -- # sync 00:03:48.922 22:44:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:48.922 22:44:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:48.922 22:44:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:50.304 22:44:29 -- spdk/autotest.sh@111 -- # uname -s 00:03:50.304 22:44:29 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:50.304 22:44:29 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:50.304 22:44:29 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:50.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.134 Hugepages 00:03:51.134 node hugesize free / total 00:03:51.134 node0 1048576kB 0 / 0 00:03:51.134 node0 2048kB 0 / 0 00:03:51.134 00:03:51.134 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.395 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:51.395 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:51.395 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:51.395 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:51.656 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:51.656 22:44:30 -- spdk/autotest.sh@117 -- # uname -s 00:03:51.656 22:44:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:51.656 22:44:30 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:51.656 22:44:30 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:51.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.863 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.863 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.863 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.863 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.863 22:44:31 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:53.824 22:44:32 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:53.824 22:44:32 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:53.824 22:44:32 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:53.824 22:44:32 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:53.824 22:44:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:53.824 22:44:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:53.824 22:44:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.824 22:44:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:53.824 22:44:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:53.824 22:44:32 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:53.824 22:44:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:53.824 22:44:32 -- common/autotest_common.sh@1522 
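Everything from autotest.sh@97 onward in the trace above is the pre-cleanup wipe: each /dev/nvme*n!(*p*) namespace is probed with spdk-gpt.py and blkid, and any namespace with no recognisable partition table ("No valid GPT data, bailing", empty PTTYPE) has its first MiB zeroed so stale metadata cannot bleed into the tests that follow. A condensed sketch of that loop, keeping only the blkid branch (the real script also consults spdk-gpt.py before deciding):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        # treat the namespace as free when blkid reports no partition-table type
        if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the first MiB
        fi
    done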
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.348 Waiting for block devices as requested 00:03:54.348 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.348 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.348 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.609 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:59.898 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:59.898 22:44:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:59.898 22:44:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:59.898 22:44:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:59.898 22:44:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:59.898 22:44:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:59.898 22:44:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:59.898 22:44:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:59.898 22:44:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:59.898 22:44:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:59.898 22:44:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:59.898 22:44:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:59.898 22:44:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:59.898 22:44:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:59.898 22:44:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:59.898 22:44:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:59.898 22:44:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:59.898 22:44:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:59.898 22:44:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:59.898 22:44:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:59.898 22:44:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:59.898 22:44:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:59.898 22:44:38 -- common/autotest_common.sh@1543 -- # continue 00:03:59.898 22:44:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:59.898 22:44:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:59.898 22:44:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:59.898 22:44:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:59.899 22:44:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:59.899 22:44:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:59.899 22:44:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:59.899 22:44:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:59.899 22:44:38 -- common/autotest_common.sh@1526 -- # [[ 
-z /dev/nvme0 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:59.899 22:44:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:59.899 22:44:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:59.899 22:44:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1543 -- # continue 00:03:59.899 22:44:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:59.899 22:44:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:59.899 22:44:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:59.899 22:44:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:03:59.899 22:44:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:59.899 22:44:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:59.899 22:44:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:59.899 22:44:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1543 -- # continue 00:03:59.899 22:44:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:59.899 22:44:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:59.899 22:44:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:03:59.899 22:44:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:59.899 22:44:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:59.899 22:44:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 
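The controller probes traced through this stretch resolve each PCI address to its character device via /sys/class/nvme and then parse 'nvme id-ctrl': an OACS value of 0x12a has bit 3 (0x8) set, meaning namespace management is supported, and unvmcap is read afterwards to decide whether any capacity still needs reverting. A rough per-address sketch (assumes nvme-cli is installed; the variable names here are illustrative, not the script's own):

    bdf=0000:00:10.0
    ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then
        echo "/dev/$ctrlr supports namespace management"
    fi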
00:03:59.899 22:44:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:59.899 22:44:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:03:59.899 22:44:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:03:59.899 22:44:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:59.899 22:44:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:59.899 22:44:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:59.899 22:44:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:59.899 22:44:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:59.899 22:44:38 -- common/autotest_common.sh@1543 -- # continue 00:03:59.899 22:44:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:59.899 22:44:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:59.899 22:44:38 -- common/autotest_common.sh@10 -- # set +x 00:03:59.899 22:44:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:59.899 22:44:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.899 22:44:38 -- common/autotest_common.sh@10 -- # set +x 00:03:59.899 22:44:38 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.733 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.733 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.995 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.995 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.995 22:44:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:00.995 22:44:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.995 22:44:39 -- common/autotest_common.sh@10 -- # set +x 00:04:00.995 22:44:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:00.995 22:44:40 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:00.995 22:44:40 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:00.995 22:44:40 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:00.995 22:44:40 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:00.995 22:44:40 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:00.995 22:44:40 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:00.995 22:44:40 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:00.995 22:44:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:00.995 22:44:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:00.995 22:44:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.995 22:44:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:00.995 22:44:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:00.995 22:44:40 -- common/autotest_common.sh@1500 -- # (( 4 == 0 
)) 00:04:00.995 22:44:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:00.995 22:44:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:00.995 22:44:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:00.995 22:44:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:00.995 22:44:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:00.995 22:44:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:00.995 22:44:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:00.995 22:44:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:00.995 22:44:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:00.995 22:44:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:00.995 22:44:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:00.995 22:44:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:00.995 22:44:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:00.995 22:44:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:00.995 22:44:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:00.995 22:44:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:00.995 22:44:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:00.995 22:44:40 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:00.995 22:44:40 -- common/autotest_common.sh@1572 -- # return 0 00:04:00.995 22:44:40 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:00.995 22:44:40 -- common/autotest_common.sh@1580 -- # return 0 00:04:00.995 22:44:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:00.995 22:44:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:00.995 22:44:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:00.995 22:44:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:00.995 22:44:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:00.995 22:44:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.995 22:44:40 -- common/autotest_common.sh@10 -- # set +x 00:04:01.257 22:44:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:01.257 22:44:40 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:01.257 22:44:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.257 22:44:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.257 22:44:40 -- common/autotest_common.sh@10 -- # set +x 00:04:01.257 ************************************ 00:04:01.257 START TEST env 00:04:01.257 ************************************ 00:04:01.257 22:44:40 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:01.257 * Looking for test storage... 
00:04:01.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:01.257 22:44:40 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:01.257 22:44:40 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:01.257 22:44:40 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:01.257 22:44:40 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:01.257 22:44:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.257 22:44:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.257 22:44:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.257 22:44:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.257 22:44:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.257 22:44:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.257 22:44:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.257 22:44:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.257 22:44:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.257 22:44:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.257 22:44:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.257 22:44:40 env -- scripts/common.sh@344 -- # case "$op" in 00:04:01.257 22:44:40 env -- scripts/common.sh@345 -- # : 1 00:04:01.258 22:44:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.258 22:44:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:01.258 22:44:40 env -- scripts/common.sh@365 -- # decimal 1 00:04:01.258 22:44:40 env -- scripts/common.sh@353 -- # local d=1 00:04:01.258 22:44:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.258 22:44:40 env -- scripts/common.sh@355 -- # echo 1 00:04:01.258 22:44:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.258 22:44:40 env -- scripts/common.sh@366 -- # decimal 2 00:04:01.258 22:44:40 env -- scripts/common.sh@353 -- # local d=2 00:04:01.258 22:44:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.258 22:44:40 env -- scripts/common.sh@355 -- # echo 2 00:04:01.258 22:44:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.258 22:44:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.258 22:44:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.258 22:44:40 env -- scripts/common.sh@368 -- # return 0 00:04:01.258 22:44:40 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.258 22:44:40 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.258 --rc genhtml_branch_coverage=1 00:04:01.258 --rc genhtml_function_coverage=1 00:04:01.258 --rc genhtml_legend=1 00:04:01.258 --rc geninfo_all_blocks=1 00:04:01.258 --rc geninfo_unexecuted_blocks=1 00:04:01.258 00:04:01.258 ' 00:04:01.258 22:44:40 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.258 --rc genhtml_branch_coverage=1 00:04:01.258 --rc genhtml_function_coverage=1 00:04:01.258 --rc genhtml_legend=1 00:04:01.258 --rc geninfo_all_blocks=1 00:04:01.258 --rc geninfo_unexecuted_blocks=1 00:04:01.258 00:04:01.258 ' 00:04:01.258 22:44:40 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.258 --rc genhtml_branch_coverage=1 00:04:01.258 --rc genhtml_function_coverage=1 00:04:01.258 --rc 
genhtml_legend=1 00:04:01.258 --rc geninfo_all_blocks=1 00:04:01.258 --rc geninfo_unexecuted_blocks=1 00:04:01.258 00:04:01.258 ' 00:04:01.258 22:44:40 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:01.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.258 --rc genhtml_branch_coverage=1 00:04:01.258 --rc genhtml_function_coverage=1 00:04:01.258 --rc genhtml_legend=1 00:04:01.258 --rc geninfo_all_blocks=1 00:04:01.258 --rc geninfo_unexecuted_blocks=1 00:04:01.258 00:04:01.258 ' 00:04:01.258 22:44:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:01.258 22:44:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.258 22:44:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.258 22:44:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.258 ************************************ 00:04:01.258 START TEST env_memory 00:04:01.258 ************************************ 00:04:01.258 22:44:40 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:01.258 00:04:01.258 00:04:01.258 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.258 http://cunit.sourceforge.net/ 00:04:01.258 00:04:01.258 00:04:01.258 Suite: memory 00:04:01.519 Test: alloc and free memory map ...[2024-12-13 22:44:40.401246] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:01.519 passed 00:04:01.519 Test: mem map translation ...[2024-12-13 22:44:40.440054] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:01.519 [2024-12-13 22:44:40.440095] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:01.519 [2024-12-13 22:44:40.440156] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:01.519 [2024-12-13 22:44:40.440171] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:01.519 passed 00:04:01.519 Test: mem map registration ...[2024-12-13 22:44:40.508330] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:01.519 [2024-12-13 22:44:40.508368] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:01.519 passed 00:04:01.519 Test: mem map adjacent registrations ...passed 00:04:01.519 00:04:01.519 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.519 suites 1 1 n/a 0 0 00:04:01.519 tests 4 4 4 0 0 00:04:01.519 asserts 152 152 152 0 n/a 00:04:01.519 00:04:01.519 Elapsed time = 0.233 seconds 00:04:01.519 00:04:01.519 real 0m0.268s 00:04:01.519 user 0m0.238s 00:04:01.519 sys 0m0.022s 00:04:01.519 ************************************ 00:04:01.519 END TEST env_memory 00:04:01.519 22:44:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.519 22:44:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:01.519 ************************************ 00:04:01.780 22:44:40 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:01.780 22:44:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.780 22:44:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.780 22:44:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.780 ************************************ 00:04:01.780 START TEST env_vtophys 00:04:01.780 ************************************ 00:04:01.780 22:44:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:01.780 EAL: lib.eal log level changed from notice to debug 00:04:01.780 EAL: Detected lcore 0 as core 0 on socket 0 00:04:01.780 EAL: Detected lcore 1 as core 0 on socket 0 00:04:01.780 EAL: Detected lcore 2 as core 0 on socket 0 00:04:01.780 EAL: Detected lcore 3 as core 0 on socket 0 00:04:01.780 EAL: Detected lcore 4 as core 0 on socket 0 00:04:01.780 EAL: Detected lcore 5 as core 0 on socket 0 00:04:01.780 EAL: Detected lcore 6 as core 0 on socket 0 00:04:01.780 EAL: Detected lcore 7 as core 0 on socket 0 00:04:01.780 EAL: Detected lcore 8 as core 0 on socket 0 00:04:01.780 EAL: Detected lcore 9 as core 0 on socket 0 00:04:01.780 EAL: Maximum logical cores by configuration: 128 00:04:01.780 EAL: Detected CPU lcores: 10 00:04:01.780 EAL: Detected NUMA nodes: 1 00:04:01.780 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:01.780 EAL: Detected shared linkage of DPDK 00:04:01.780 EAL: No shared files mode enabled, IPC will be disabled 00:04:01.780 EAL: Selected IOVA mode 'PA' 00:04:01.780 EAL: Probing VFIO support... 00:04:01.780 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:01.780 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:01.780 EAL: Ask a virtual area of 0x2e000 bytes 00:04:01.780 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:01.780 EAL: Setting up physically contiguous memory... 
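[Editor's annotation, not part of the captured output] The EAL lines above show the environment the vtophys test runs in: the vfio kernel module is absent (so VFIO support is skipped and IOVA mode 'PA' is selected) and memory is backed by 2 MiB hugepages. A minimal sketch of the preconditions being probed, assuming the SPDK tree sits at the path shown in the log; running the binary by hand may additionally require hugepages to be reserved first:

    # Check the same preconditions the EAL output above reports on.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # VFIO: the log reports "Module /sys/module/vfio not found", so EAL falls back.
    [ -d /sys/module/vfio ] && echo "vfio module loaded" || echo "vfio module not loaded"

    # Hugepages: EAL detected hugepage_sz:2097152 (2 MiB pages) on socket 0.
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo

    # Re-run the same unit-test binary the harness invokes above.
    "$SPDK_DIR"/test/env/vtophys/vtophys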
00:04:01.780 EAL: Setting maximum number of open files to 524288 00:04:01.780 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:01.780 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:01.780 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.780 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:01.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.780 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.780 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:01.780 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:01.780 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.780 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:01.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.780 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.780 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:01.780 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:01.780 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.780 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:01.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.780 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.780 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:01.780 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:01.780 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.780 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:01.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.780 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.780 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:01.780 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:01.780 EAL: Hugepages will be freed exactly as allocated. 00:04:01.780 EAL: No shared files mode enabled, IPC is disabled 00:04:01.780 EAL: No shared files mode enabled, IPC is disabled 00:04:01.780 EAL: TSC frequency is ~2600000 KHz 00:04:01.780 EAL: Main lcore 0 is ready (tid=7f9e4ca19a40;cpuset=[0]) 00:04:01.780 EAL: Trying to obtain current memory policy. 00:04:01.780 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.780 EAL: Restoring previous memory policy: 0 00:04:01.780 EAL: request: mp_malloc_sync 00:04:01.780 EAL: No shared files mode enabled, IPC is disabled 00:04:01.780 EAL: Heap on socket 0 was expanded by 2MB 00:04:01.780 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:01.780 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:01.780 EAL: Mem event callback 'spdk:(nil)' registered 00:04:01.780 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:01.780 00:04:01.780 00:04:01.780 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.780 http://cunit.sourceforge.net/ 00:04:01.780 00:04:01.780 00:04:01.780 Suite: components_suite 00:04:02.349 Test: vtophys_malloc_test ...passed 00:04:02.349 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:02.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.349 EAL: Restoring previous memory policy: 4 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was expanded by 4MB 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was shrunk by 4MB 00:04:02.349 EAL: Trying to obtain current memory policy. 00:04:02.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.349 EAL: Restoring previous memory policy: 4 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was expanded by 6MB 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was shrunk by 6MB 00:04:02.349 EAL: Trying to obtain current memory policy. 00:04:02.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.349 EAL: Restoring previous memory policy: 4 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was expanded by 10MB 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was shrunk by 10MB 00:04:02.349 EAL: Trying to obtain current memory policy. 00:04:02.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.349 EAL: Restoring previous memory policy: 4 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was expanded by 18MB 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was shrunk by 18MB 00:04:02.349 EAL: Trying to obtain current memory policy. 00:04:02.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.349 EAL: Restoring previous memory policy: 4 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was expanded by 34MB 00:04:02.349 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.349 EAL: request: mp_malloc_sync 00:04:02.349 EAL: No shared files mode enabled, IPC is disabled 00:04:02.349 EAL: Heap on socket 0 was shrunk by 34MB 00:04:02.349 EAL: Trying to obtain current memory policy. 
00:04:02.349 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.350 EAL: Restoring previous memory policy: 4 00:04:02.350 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.350 EAL: request: mp_malloc_sync 00:04:02.350 EAL: No shared files mode enabled, IPC is disabled 00:04:02.350 EAL: Heap on socket 0 was expanded by 66MB 00:04:02.609 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.609 EAL: request: mp_malloc_sync 00:04:02.609 EAL: No shared files mode enabled, IPC is disabled 00:04:02.609 EAL: Heap on socket 0 was shrunk by 66MB 00:04:02.609 EAL: Trying to obtain current memory policy. 00:04:02.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.609 EAL: Restoring previous memory policy: 4 00:04:02.609 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.609 EAL: request: mp_malloc_sync 00:04:02.609 EAL: No shared files mode enabled, IPC is disabled 00:04:02.609 EAL: Heap on socket 0 was expanded by 130MB 00:04:02.869 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.869 EAL: request: mp_malloc_sync 00:04:02.869 EAL: No shared files mode enabled, IPC is disabled 00:04:02.869 EAL: Heap on socket 0 was shrunk by 130MB 00:04:02.869 EAL: Trying to obtain current memory policy. 00:04:02.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.869 EAL: Restoring previous memory policy: 4 00:04:02.869 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.869 EAL: request: mp_malloc_sync 00:04:02.869 EAL: No shared files mode enabled, IPC is disabled 00:04:02.869 EAL: Heap on socket 0 was expanded by 258MB 00:04:03.129 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.389 EAL: request: mp_malloc_sync 00:04:03.389 EAL: No shared files mode enabled, IPC is disabled 00:04:03.389 EAL: Heap on socket 0 was shrunk by 258MB 00:04:03.649 EAL: Trying to obtain current memory policy. 00:04:03.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.649 EAL: Restoring previous memory policy: 4 00:04:03.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.649 EAL: request: mp_malloc_sync 00:04:03.649 EAL: No shared files mode enabled, IPC is disabled 00:04:03.649 EAL: Heap on socket 0 was expanded by 514MB 00:04:04.219 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.479 EAL: request: mp_malloc_sync 00:04:04.479 EAL: No shared files mode enabled, IPC is disabled 00:04:04.479 EAL: Heap on socket 0 was shrunk by 514MB 00:04:05.008 EAL: Trying to obtain current memory policy. 
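[Editor's annotation, not part of the captured output] Each vtophys_spdk_malloc_test iteration above expands and then shrinks the heap by a growing amount (4, 6, 10, 18, 34, 66, 130, 258, 514 MB, with 1026 MB following below). The sizes appear to follow 2^n + 2 MB; a small arithmetic sketch reproducing that sequence:

    # Reproduce the allocation-size sequence seen in the iterations above.
    for n in $(seq 1 10); do
        echo "$(( (1 << n) + 2 ))MB"   # 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026
    done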
00:04:05.008 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.008 EAL: Restoring previous memory policy: 4 00:04:05.008 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.008 EAL: request: mp_malloc_sync 00:04:05.008 EAL: No shared files mode enabled, IPC is disabled 00:04:05.008 EAL: Heap on socket 0 was expanded by 1026MB 00:04:05.941 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.941 EAL: request: mp_malloc_sync 00:04:05.941 EAL: No shared files mode enabled, IPC is disabled 00:04:05.941 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:06.880 passed 00:04:06.880 00:04:06.880 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.880 suites 1 1 n/a 0 0 00:04:06.880 tests 2 2 2 0 0 00:04:06.880 asserts 5663 5663 5663 0 n/a 00:04:06.880 00:04:06.880 Elapsed time = 4.892 seconds 00:04:06.880 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.880 EAL: request: mp_malloc_sync 00:04:06.880 EAL: No shared files mode enabled, IPC is disabled 00:04:06.880 EAL: Heap on socket 0 was shrunk by 2MB 00:04:06.880 EAL: No shared files mode enabled, IPC is disabled 00:04:06.880 EAL: No shared files mode enabled, IPC is disabled 00:04:06.880 EAL: No shared files mode enabled, IPC is disabled 00:04:06.880 00:04:06.880 real 0m5.168s 00:04:06.880 user 0m4.177s 00:04:06.880 sys 0m0.839s 00:04:06.880 22:44:45 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.880 22:44:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:06.880 ************************************ 00:04:06.880 END TEST env_vtophys 00:04:06.880 ************************************ 00:04:06.880 22:44:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:06.880 22:44:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.880 22:44:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.880 22:44:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.880 ************************************ 00:04:06.880 START TEST env_pci 00:04:06.880 ************************************ 00:04:06.880 22:44:45 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:06.880 00:04:06.880 00:04:06.880 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.880 http://cunit.sourceforge.net/ 00:04:06.880 00:04:06.880 00:04:06.880 Suite: pci 00:04:06.880 Test: pci_hook ...[2024-12-13 22:44:45.928372] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58789 has claimed it 00:04:06.880 passed 00:04:06.880 00:04:06.880 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.880 suites 1 1 n/a 0 0 00:04:06.880 tests 1 1 1 0 0 00:04:06.880 asserts 25 25 25 0 n/a 00:04:06.880 00:04:06.880 Elapsed time = 0.004 seconds 00:04:06.880 EAL: Cannot find device (10000:00:01.0) 00:04:06.880 EAL: Failed to attach device on primary process 00:04:06.880 00:04:06.880 real 0m0.052s 00:04:06.880 user 0m0.025s 00:04:06.880 sys 0m0.027s 00:04:06.880 22:44:45 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.880 ************************************ 00:04:06.880 END TEST env_pci 00:04:06.880 ************************************ 00:04:06.880 22:44:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:06.880 22:44:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:06.880 22:44:45 env -- env/env.sh@15 -- # uname 00:04:06.880 22:44:46 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:06.880 22:44:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:06.880 22:44:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.880 22:44:46 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:06.880 22:44:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.880 22:44:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.880 ************************************ 00:04:06.880 START TEST env_dpdk_post_init 00:04:06.880 ************************************ 00:04:06.880 22:44:46 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.141 EAL: Detected CPU lcores: 10 00:04:07.141 EAL: Detected NUMA nodes: 1 00:04:07.141 EAL: Detected shared linkage of DPDK 00:04:07.141 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.141 EAL: Selected IOVA mode 'PA' 00:04:07.141 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.141 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:07.141 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:07.141 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:07.141 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:07.141 Starting DPDK initialization... 00:04:07.141 Starting SPDK post initialization... 00:04:07.141 SPDK NVMe probe 00:04:07.141 Attaching to 0000:00:10.0 00:04:07.141 Attaching to 0000:00:11.0 00:04:07.141 Attaching to 0000:00:12.0 00:04:07.141 Attaching to 0000:00:13.0 00:04:07.141 Attached to 0000:00:10.0 00:04:07.141 Attached to 0000:00:11.0 00:04:07.141 Attached to 0000:00:13.0 00:04:07.141 Attached to 0000:00:12.0 00:04:07.141 Cleaning up... 
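[Editor's annotation, not part of the captured output] The env_dpdk_post_init run above is invoked with core mask 0x1 plus --base-virtaddr, which env.sh appends only on Linux (see the uname check in the trace), and it probes the four emulated NVMe controllers (1b36:0010) at 0000:00:10.0 through 0000:00:13.0. A sketch of the same invocation, with values taken directly from the log:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR"/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000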
00:04:07.141 00:04:07.141 real 0m0.249s 00:04:07.141 user 0m0.084s 00:04:07.141 sys 0m0.067s 00:04:07.141 22:44:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.141 ************************************ 00:04:07.141 END TEST env_dpdk_post_init 00:04:07.141 ************************************ 00:04:07.141 22:44:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:07.401 22:44:46 env -- env/env.sh@26 -- # uname 00:04:07.401 22:44:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:07.401 22:44:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.401 22:44:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.401 22:44:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.401 22:44:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.401 ************************************ 00:04:07.401 START TEST env_mem_callbacks 00:04:07.401 ************************************ 00:04:07.401 22:44:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.401 EAL: Detected CPU lcores: 10 00:04:07.401 EAL: Detected NUMA nodes: 1 00:04:07.401 EAL: Detected shared linkage of DPDK 00:04:07.401 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.401 EAL: Selected IOVA mode 'PA' 00:04:07.401 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.401 00:04:07.401 00:04:07.401 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.401 http://cunit.sourceforge.net/ 00:04:07.401 00:04:07.401 00:04:07.401 Suite: memory 00:04:07.401 Test: test ... 00:04:07.401 register 0x200000200000 2097152 00:04:07.401 malloc 3145728 00:04:07.401 register 0x200000400000 4194304 00:04:07.401 buf 0x2000004fffc0 len 3145728 PASSED 00:04:07.401 malloc 64 00:04:07.401 buf 0x2000004ffec0 len 64 PASSED 00:04:07.401 malloc 4194304 00:04:07.401 register 0x200000800000 6291456 00:04:07.401 buf 0x2000009fffc0 len 4194304 PASSED 00:04:07.401 free 0x2000004fffc0 3145728 00:04:07.401 free 0x2000004ffec0 64 00:04:07.401 unregister 0x200000400000 4194304 PASSED 00:04:07.401 free 0x2000009fffc0 4194304 00:04:07.401 unregister 0x200000800000 6291456 PASSED 00:04:07.401 malloc 8388608 00:04:07.401 register 0x200000400000 10485760 00:04:07.401 buf 0x2000005fffc0 len 8388608 PASSED 00:04:07.401 free 0x2000005fffc0 8388608 00:04:07.401 unregister 0x200000400000 10485760 PASSED 00:04:07.401 passed 00:04:07.401 00:04:07.401 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.401 suites 1 1 n/a 0 0 00:04:07.401 tests 1 1 1 0 0 00:04:07.401 asserts 15 15 15 0 n/a 00:04:07.401 00:04:07.401 Elapsed time = 0.050 seconds 00:04:07.662 00:04:07.662 real 0m0.225s 00:04:07.662 user 0m0.068s 00:04:07.662 sys 0m0.054s 00:04:07.662 22:44:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.662 ************************************ 00:04:07.662 END TEST env_mem_callbacks 00:04:07.662 ************************************ 00:04:07.662 22:44:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:07.662 00:04:07.662 real 0m6.450s 00:04:07.662 user 0m4.784s 00:04:07.662 sys 0m1.215s 00:04:07.662 22:44:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.662 ************************************ 00:04:07.662 END TEST env 00:04:07.662 ************************************ 00:04:07.662 22:44:46 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:07.662 22:44:46 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:07.662 22:44:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.662 22:44:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.662 22:44:46 -- common/autotest_common.sh@10 -- # set +x 00:04:07.662 ************************************ 00:04:07.662 START TEST rpc 00:04:07.662 ************************************ 00:04:07.662 22:44:46 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:07.662 * Looking for test storage... 00:04:07.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.662 22:44:46 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:07.662 22:44:46 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:07.662 22:44:46 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:07.662 22:44:46 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:07.662 22:44:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.662 22:44:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.662 22:44:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.662 22:44:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.662 22:44:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.662 22:44:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.922 22:44:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.922 22:44:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.922 22:44:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.922 22:44:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.922 22:44:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.922 22:44:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:07.922 22:44:46 rpc -- scripts/common.sh@345 -- # : 1 00:04:07.922 22:44:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.923 22:44:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.923 22:44:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:07.923 22:44:46 rpc -- scripts/common.sh@353 -- # local d=1 00:04:07.923 22:44:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.923 22:44:46 rpc -- scripts/common.sh@355 -- # echo 1 00:04:07.923 22:44:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.923 22:44:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:07.923 22:44:46 rpc -- scripts/common.sh@353 -- # local d=2 00:04:07.923 22:44:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.923 22:44:46 rpc -- scripts/common.sh@355 -- # echo 2 00:04:07.923 22:44:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.923 22:44:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.923 22:44:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.923 22:44:46 rpc -- scripts/common.sh@368 -- # return 0 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.923 --rc genhtml_branch_coverage=1 00:04:07.923 --rc genhtml_function_coverage=1 00:04:07.923 --rc genhtml_legend=1 00:04:07.923 --rc geninfo_all_blocks=1 00:04:07.923 --rc geninfo_unexecuted_blocks=1 00:04:07.923 00:04:07.923 ' 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.923 --rc genhtml_branch_coverage=1 00:04:07.923 --rc genhtml_function_coverage=1 00:04:07.923 --rc genhtml_legend=1 00:04:07.923 --rc geninfo_all_blocks=1 00:04:07.923 --rc geninfo_unexecuted_blocks=1 00:04:07.923 00:04:07.923 ' 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:07.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.923 --rc genhtml_branch_coverage=1 00:04:07.923 --rc genhtml_function_coverage=1 00:04:07.923 --rc genhtml_legend=1 00:04:07.923 --rc geninfo_all_blocks=1 00:04:07.923 --rc geninfo_unexecuted_blocks=1 00:04:07.923 00:04:07.923 ' 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.923 --rc genhtml_branch_coverage=1 00:04:07.923 --rc genhtml_function_coverage=1 00:04:07.923 --rc genhtml_legend=1 00:04:07.923 --rc geninfo_all_blocks=1 00:04:07.923 --rc geninfo_unexecuted_blocks=1 00:04:07.923 00:04:07.923 ' 00:04:07.923 22:44:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58916 00:04:07.923 22:44:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.923 22:44:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58916 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 58916 ']' 00:04:07.923 22:44:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
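[Editor's annotation, not part of the captured output] The rpc suite above launches spdk_tgt with "-e bdev" and then waits for it to start listening on /var/tmp/spdk.sock (the waitforlisten helper). A hedged sketch of doing the same by hand; the polling loop and the use of rpc_get_methods as a readiness probe are this annotation's choice, not the harness's exact mechanism:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR"/build/bin/spdk_tgt -e bdev &

    # Poll the RPC socket until the target answers.
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "spdk_tgt is listening on /var/tmp/spdk.sock"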
00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.923 22:44:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.923 [2024-12-13 22:44:46.903490] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:07.923 [2024-12-13 22:44:46.903647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58916 ] 00:04:08.183 [2024-12-13 22:44:47.067095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.183 [2024-12-13 22:44:47.186097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:08.183 [2024-12-13 22:44:47.186163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58916' to capture a snapshot of events at runtime. 00:04:08.183 [2024-12-13 22:44:47.186174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:08.183 [2024-12-13 22:44:47.186185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:08.183 [2024-12-13 22:44:47.186193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58916 for offline analysis/debug. 00:04:08.183 [2024-12-13 22:44:47.187164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.753 22:44:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.753 22:44:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:08.753 22:44:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.753 22:44:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.753 22:44:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.753 22:44:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.753 22:44:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.753 22:44:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.753 22:44:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.753 ************************************ 00:04:08.753 START TEST rpc_integrity 00:04:08.753 ************************************ 00:04:08.753 22:44:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:09.014 22:44:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.014 22:44:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.014 22:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.014 22:44:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.014 22:44:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.014 22:44:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.014 22:44:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.014 22:44:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.014 22:44:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.014 22:44:47 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.014 22:44:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.014 22:44:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:09.014 22:44:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.014 22:44:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.014 22:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.014 22:44:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.014 22:44:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.014 { 00:04:09.014 "name": "Malloc0", 00:04:09.014 "aliases": [ 00:04:09.014 "79b2e634-355a-4e14-99ef-a05d4841d0af" 00:04:09.014 ], 00:04:09.014 "product_name": "Malloc disk", 00:04:09.014 "block_size": 512, 00:04:09.014 "num_blocks": 16384, 00:04:09.014 "uuid": "79b2e634-355a-4e14-99ef-a05d4841d0af", 00:04:09.014 "assigned_rate_limits": { 00:04:09.014 "rw_ios_per_sec": 0, 00:04:09.015 "rw_mbytes_per_sec": 0, 00:04:09.015 "r_mbytes_per_sec": 0, 00:04:09.015 "w_mbytes_per_sec": 0 00:04:09.015 }, 00:04:09.015 "claimed": false, 00:04:09.015 "zoned": false, 00:04:09.015 "supported_io_types": { 00:04:09.015 "read": true, 00:04:09.015 "write": true, 00:04:09.015 "unmap": true, 00:04:09.015 "flush": true, 00:04:09.015 "reset": true, 00:04:09.015 "nvme_admin": false, 00:04:09.015 "nvme_io": false, 00:04:09.015 "nvme_io_md": false, 00:04:09.015 "write_zeroes": true, 00:04:09.015 "zcopy": true, 00:04:09.015 "get_zone_info": false, 00:04:09.015 "zone_management": false, 00:04:09.015 "zone_append": false, 00:04:09.015 "compare": false, 00:04:09.015 "compare_and_write": false, 00:04:09.015 "abort": true, 00:04:09.015 "seek_hole": false, 00:04:09.015 "seek_data": false, 00:04:09.015 "copy": true, 00:04:09.015 "nvme_iov_md": false 00:04:09.015 }, 00:04:09.015 "memory_domains": [ 00:04:09.015 { 00:04:09.015 "dma_device_id": "system", 00:04:09.015 "dma_device_type": 1 00:04:09.015 }, 00:04:09.015 { 00:04:09.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.015 "dma_device_type": 2 00:04:09.015 } 00:04:09.015 ], 00:04:09.015 "driver_specific": {} 00:04:09.015 } 00:04:09.015 ]' 00:04:09.015 22:44:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.015 [2024-12-13 22:44:48.006855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:09.015 [2024-12-13 22:44:48.006932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.015 [2024-12-13 22:44:48.006959] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:09.015 [2024-12-13 22:44:48.006971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.015 [2024-12-13 22:44:48.009450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.015 [2024-12-13 22:44:48.009508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.015 Passthru0 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.015 
22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.015 { 00:04:09.015 "name": "Malloc0", 00:04:09.015 "aliases": [ 00:04:09.015 "79b2e634-355a-4e14-99ef-a05d4841d0af" 00:04:09.015 ], 00:04:09.015 "product_name": "Malloc disk", 00:04:09.015 "block_size": 512, 00:04:09.015 "num_blocks": 16384, 00:04:09.015 "uuid": "79b2e634-355a-4e14-99ef-a05d4841d0af", 00:04:09.015 "assigned_rate_limits": { 00:04:09.015 "rw_ios_per_sec": 0, 00:04:09.015 "rw_mbytes_per_sec": 0, 00:04:09.015 "r_mbytes_per_sec": 0, 00:04:09.015 "w_mbytes_per_sec": 0 00:04:09.015 }, 00:04:09.015 "claimed": true, 00:04:09.015 "claim_type": "exclusive_write", 00:04:09.015 "zoned": false, 00:04:09.015 "supported_io_types": { 00:04:09.015 "read": true, 00:04:09.015 "write": true, 00:04:09.015 "unmap": true, 00:04:09.015 "flush": true, 00:04:09.015 "reset": true, 00:04:09.015 "nvme_admin": false, 00:04:09.015 "nvme_io": false, 00:04:09.015 "nvme_io_md": false, 00:04:09.015 "write_zeroes": true, 00:04:09.015 "zcopy": true, 00:04:09.015 "get_zone_info": false, 00:04:09.015 "zone_management": false, 00:04:09.015 "zone_append": false, 00:04:09.015 "compare": false, 00:04:09.015 "compare_and_write": false, 00:04:09.015 "abort": true, 00:04:09.015 "seek_hole": false, 00:04:09.015 "seek_data": false, 00:04:09.015 "copy": true, 00:04:09.015 "nvme_iov_md": false 00:04:09.015 }, 00:04:09.015 "memory_domains": [ 00:04:09.015 { 00:04:09.015 "dma_device_id": "system", 00:04:09.015 "dma_device_type": 1 00:04:09.015 }, 00:04:09.015 { 00:04:09.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.015 "dma_device_type": 2 00:04:09.015 } 00:04:09.015 ], 00:04:09.015 "driver_specific": {} 00:04:09.015 }, 00:04:09.015 { 00:04:09.015 "name": "Passthru0", 00:04:09.015 "aliases": [ 00:04:09.015 "e2f253f2-7d2a-56ff-9dd3-266eb75e6228" 00:04:09.015 ], 00:04:09.015 "product_name": "passthru", 00:04:09.015 "block_size": 512, 00:04:09.015 "num_blocks": 16384, 00:04:09.015 "uuid": "e2f253f2-7d2a-56ff-9dd3-266eb75e6228", 00:04:09.015 "assigned_rate_limits": { 00:04:09.015 "rw_ios_per_sec": 0, 00:04:09.015 "rw_mbytes_per_sec": 0, 00:04:09.015 "r_mbytes_per_sec": 0, 00:04:09.015 "w_mbytes_per_sec": 0 00:04:09.015 }, 00:04:09.015 "claimed": false, 00:04:09.015 "zoned": false, 00:04:09.015 "supported_io_types": { 00:04:09.015 "read": true, 00:04:09.015 "write": true, 00:04:09.015 "unmap": true, 00:04:09.015 "flush": true, 00:04:09.015 "reset": true, 00:04:09.015 "nvme_admin": false, 00:04:09.015 "nvme_io": false, 00:04:09.015 "nvme_io_md": false, 00:04:09.015 "write_zeroes": true, 00:04:09.015 "zcopy": true, 00:04:09.015 "get_zone_info": false, 00:04:09.015 "zone_management": false, 00:04:09.015 "zone_append": false, 00:04:09.015 "compare": false, 00:04:09.015 "compare_and_write": false, 00:04:09.015 "abort": true, 00:04:09.015 "seek_hole": false, 00:04:09.015 "seek_data": false, 00:04:09.015 "copy": true, 00:04:09.015 "nvme_iov_md": false 00:04:09.015 }, 00:04:09.015 "memory_domains": [ 00:04:09.015 { 00:04:09.015 "dma_device_id": "system", 00:04:09.015 "dma_device_type": 1 00:04:09.015 }, 00:04:09.015 { 00:04:09.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.015 "dma_device_type": 2 
00:04:09.015 } 00:04:09.015 ], 00:04:09.015 "driver_specific": { 00:04:09.015 "passthru": { 00:04:09.015 "name": "Passthru0", 00:04:09.015 "base_bdev_name": "Malloc0" 00:04:09.015 } 00:04:09.015 } 00:04:09.015 } 00:04:09.015 ]' 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.015 ************************************ 00:04:09.015 END TEST rpc_integrity 00:04:09.015 ************************************ 00:04:09.015 22:44:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.015 00:04:09.015 real 0m0.248s 00:04:09.015 user 0m0.132s 00:04:09.015 sys 0m0.031s 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.015 22:44:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.275 22:44:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:09.275 22:44:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.275 22:44:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.275 22:44:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.275 ************************************ 00:04:09.275 START TEST rpc_plugins 00:04:09.275 ************************************ 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:09.275 { 00:04:09.275 "name": "Malloc1", 00:04:09.275 "aliases": 
[ 00:04:09.275 "281f3871-3ffa-4639-81df-7a06006a9dc0" 00:04:09.275 ], 00:04:09.275 "product_name": "Malloc disk", 00:04:09.275 "block_size": 4096, 00:04:09.275 "num_blocks": 256, 00:04:09.275 "uuid": "281f3871-3ffa-4639-81df-7a06006a9dc0", 00:04:09.275 "assigned_rate_limits": { 00:04:09.275 "rw_ios_per_sec": 0, 00:04:09.275 "rw_mbytes_per_sec": 0, 00:04:09.275 "r_mbytes_per_sec": 0, 00:04:09.275 "w_mbytes_per_sec": 0 00:04:09.275 }, 00:04:09.275 "claimed": false, 00:04:09.275 "zoned": false, 00:04:09.275 "supported_io_types": { 00:04:09.275 "read": true, 00:04:09.275 "write": true, 00:04:09.275 "unmap": true, 00:04:09.275 "flush": true, 00:04:09.275 "reset": true, 00:04:09.275 "nvme_admin": false, 00:04:09.275 "nvme_io": false, 00:04:09.275 "nvme_io_md": false, 00:04:09.275 "write_zeroes": true, 00:04:09.275 "zcopy": true, 00:04:09.275 "get_zone_info": false, 00:04:09.275 "zone_management": false, 00:04:09.275 "zone_append": false, 00:04:09.275 "compare": false, 00:04:09.275 "compare_and_write": false, 00:04:09.275 "abort": true, 00:04:09.275 "seek_hole": false, 00:04:09.275 "seek_data": false, 00:04:09.275 "copy": true, 00:04:09.275 "nvme_iov_md": false 00:04:09.275 }, 00:04:09.275 "memory_domains": [ 00:04:09.275 { 00:04:09.275 "dma_device_id": "system", 00:04:09.275 "dma_device_type": 1 00:04:09.275 }, 00:04:09.275 { 00:04:09.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.275 "dma_device_type": 2 00:04:09.275 } 00:04:09.275 ], 00:04:09.275 "driver_specific": {} 00:04:09.275 } 00:04:09.275 ]' 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:09.275 ************************************ 00:04:09.275 END TEST rpc_plugins 00:04:09.275 ************************************ 00:04:09.275 22:44:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.275 00:04:09.275 real 0m0.118s 00:04:09.275 user 0m0.066s 00:04:09.275 sys 0m0.016s 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.275 22:44:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.275 22:44:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:09.275 22:44:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.275 22:44:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.275 22:44:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.275 ************************************ 00:04:09.275 START TEST rpc_trace_cmd_test 00:04:09.275 ************************************ 00:04:09.275 22:44:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:09.275 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:09.275 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:09.275 22:44:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.275 22:44:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.275 22:44:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.275 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:09.275 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58916", 00:04:09.275 "tpoint_group_mask": "0x8", 00:04:09.275 "iscsi_conn": { 00:04:09.275 "mask": "0x2", 00:04:09.275 "tpoint_mask": "0x0" 00:04:09.275 }, 00:04:09.275 "scsi": { 00:04:09.275 "mask": "0x4", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "bdev": { 00:04:09.276 "mask": "0x8", 00:04:09.276 "tpoint_mask": "0xffffffffffffffff" 00:04:09.276 }, 00:04:09.276 "nvmf_rdma": { 00:04:09.276 "mask": "0x10", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "nvmf_tcp": { 00:04:09.276 "mask": "0x20", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "ftl": { 00:04:09.276 "mask": "0x40", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "blobfs": { 00:04:09.276 "mask": "0x80", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "dsa": { 00:04:09.276 "mask": "0x200", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "thread": { 00:04:09.276 "mask": "0x400", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "nvme_pcie": { 00:04:09.276 "mask": "0x800", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "iaa": { 00:04:09.276 "mask": "0x1000", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "nvme_tcp": { 00:04:09.276 "mask": "0x2000", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "bdev_nvme": { 00:04:09.276 "mask": "0x4000", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "sock": { 00:04:09.276 "mask": "0x8000", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "blob": { 00:04:09.276 "mask": "0x10000", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "bdev_raid": { 00:04:09.276 "mask": "0x20000", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 }, 00:04:09.276 "scheduler": { 00:04:09.276 "mask": "0x40000", 00:04:09.276 "tpoint_mask": "0x0" 00:04:09.276 } 00:04:09.276 }' 00:04:09.276 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.536 ************************************ 00:04:09.536 END TEST rpc_trace_cmd_test 00:04:09.536 ************************************ 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.536 00:04:09.536 real 0m0.168s 
00:04:09.536 user 0m0.136s 00:04:09.536 sys 0m0.023s 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.536 22:44:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.536 22:44:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.536 22:44:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.536 22:44:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.536 22:44:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.536 22:44:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.536 22:44:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.536 ************************************ 00:04:09.536 START TEST rpc_daemon_integrity 00:04:09.536 ************************************ 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.536 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.823 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.823 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.823 { 00:04:09.823 "name": "Malloc2", 00:04:09.823 "aliases": [ 00:04:09.823 "e5f7244d-2740-4453-beeb-a1322849a95f" 00:04:09.823 ], 00:04:09.823 "product_name": "Malloc disk", 00:04:09.823 "block_size": 512, 00:04:09.823 "num_blocks": 16384, 00:04:09.823 "uuid": "e5f7244d-2740-4453-beeb-a1322849a95f", 00:04:09.823 "assigned_rate_limits": { 00:04:09.823 "rw_ios_per_sec": 0, 00:04:09.823 "rw_mbytes_per_sec": 0, 00:04:09.823 "r_mbytes_per_sec": 0, 00:04:09.823 "w_mbytes_per_sec": 0 00:04:09.823 }, 00:04:09.823 "claimed": false, 00:04:09.823 "zoned": false, 00:04:09.823 "supported_io_types": { 00:04:09.823 "read": true, 00:04:09.823 "write": true, 00:04:09.823 "unmap": true, 00:04:09.823 "flush": true, 00:04:09.823 "reset": true, 00:04:09.823 "nvme_admin": false, 00:04:09.823 "nvme_io": false, 00:04:09.823 "nvme_io_md": false, 00:04:09.823 "write_zeroes": true, 00:04:09.823 "zcopy": true, 00:04:09.823 "get_zone_info": false, 00:04:09.823 "zone_management": false, 00:04:09.823 "zone_append": false, 00:04:09.823 "compare": false, 00:04:09.823 
"compare_and_write": false, 00:04:09.823 "abort": true, 00:04:09.823 "seek_hole": false, 00:04:09.823 "seek_data": false, 00:04:09.823 "copy": true, 00:04:09.823 "nvme_iov_md": false 00:04:09.823 }, 00:04:09.823 "memory_domains": [ 00:04:09.823 { 00:04:09.823 "dma_device_id": "system", 00:04:09.823 "dma_device_type": 1 00:04:09.823 }, 00:04:09.823 { 00:04:09.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.823 "dma_device_type": 2 00:04:09.823 } 00:04:09.823 ], 00:04:09.823 "driver_specific": {} 00:04:09.823 } 00:04:09.823 ]' 00:04:09.823 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.823 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.824 [2024-12-13 22:44:48.716913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.824 [2024-12-13 22:44:48.716982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.824 [2024-12-13 22:44:48.717006] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:09.824 [2024-12-13 22:44:48.717017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.824 [2024-12-13 22:44:48.719441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.824 [2024-12-13 22:44:48.719491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.824 Passthru0 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.824 { 00:04:09.824 "name": "Malloc2", 00:04:09.824 "aliases": [ 00:04:09.824 "e5f7244d-2740-4453-beeb-a1322849a95f" 00:04:09.824 ], 00:04:09.824 "product_name": "Malloc disk", 00:04:09.824 "block_size": 512, 00:04:09.824 "num_blocks": 16384, 00:04:09.824 "uuid": "e5f7244d-2740-4453-beeb-a1322849a95f", 00:04:09.824 "assigned_rate_limits": { 00:04:09.824 "rw_ios_per_sec": 0, 00:04:09.824 "rw_mbytes_per_sec": 0, 00:04:09.824 "r_mbytes_per_sec": 0, 00:04:09.824 "w_mbytes_per_sec": 0 00:04:09.824 }, 00:04:09.824 "claimed": true, 00:04:09.824 "claim_type": "exclusive_write", 00:04:09.824 "zoned": false, 00:04:09.824 "supported_io_types": { 00:04:09.824 "read": true, 00:04:09.824 "write": true, 00:04:09.824 "unmap": true, 00:04:09.824 "flush": true, 00:04:09.824 "reset": true, 00:04:09.824 "nvme_admin": false, 00:04:09.824 "nvme_io": false, 00:04:09.824 "nvme_io_md": false, 00:04:09.824 "write_zeroes": true, 00:04:09.824 "zcopy": true, 00:04:09.824 "get_zone_info": false, 00:04:09.824 "zone_management": false, 00:04:09.824 "zone_append": false, 00:04:09.824 "compare": false, 00:04:09.824 "compare_and_write": false, 00:04:09.824 "abort": true, 00:04:09.824 "seek_hole": false, 00:04:09.824 "seek_data": false, 
00:04:09.824 "copy": true, 00:04:09.824 "nvme_iov_md": false 00:04:09.824 }, 00:04:09.824 "memory_domains": [ 00:04:09.824 { 00:04:09.824 "dma_device_id": "system", 00:04:09.824 "dma_device_type": 1 00:04:09.824 }, 00:04:09.824 { 00:04:09.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.824 "dma_device_type": 2 00:04:09.824 } 00:04:09.824 ], 00:04:09.824 "driver_specific": {} 00:04:09.824 }, 00:04:09.824 { 00:04:09.824 "name": "Passthru0", 00:04:09.824 "aliases": [ 00:04:09.824 "6d02c0f4-454e-5ae7-8188-ac09d869ba72" 00:04:09.824 ], 00:04:09.824 "product_name": "passthru", 00:04:09.824 "block_size": 512, 00:04:09.824 "num_blocks": 16384, 00:04:09.824 "uuid": "6d02c0f4-454e-5ae7-8188-ac09d869ba72", 00:04:09.824 "assigned_rate_limits": { 00:04:09.824 "rw_ios_per_sec": 0, 00:04:09.824 "rw_mbytes_per_sec": 0, 00:04:09.824 "r_mbytes_per_sec": 0, 00:04:09.824 "w_mbytes_per_sec": 0 00:04:09.824 }, 00:04:09.824 "claimed": false, 00:04:09.824 "zoned": false, 00:04:09.824 "supported_io_types": { 00:04:09.824 "read": true, 00:04:09.824 "write": true, 00:04:09.824 "unmap": true, 00:04:09.824 "flush": true, 00:04:09.824 "reset": true, 00:04:09.824 "nvme_admin": false, 00:04:09.824 "nvme_io": false, 00:04:09.824 "nvme_io_md": false, 00:04:09.824 "write_zeroes": true, 00:04:09.824 "zcopy": true, 00:04:09.824 "get_zone_info": false, 00:04:09.824 "zone_management": false, 00:04:09.824 "zone_append": false, 00:04:09.824 "compare": false, 00:04:09.824 "compare_and_write": false, 00:04:09.824 "abort": true, 00:04:09.824 "seek_hole": false, 00:04:09.824 "seek_data": false, 00:04:09.824 "copy": true, 00:04:09.824 "nvme_iov_md": false 00:04:09.824 }, 00:04:09.824 "memory_domains": [ 00:04:09.824 { 00:04:09.824 "dma_device_id": "system", 00:04:09.824 "dma_device_type": 1 00:04:09.824 }, 00:04:09.824 { 00:04:09.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.824 "dma_device_type": 2 00:04:09.824 } 00:04:09.824 ], 00:04:09.824 "driver_specific": { 00:04:09.824 "passthru": { 00:04:09.824 "name": "Passthru0", 00:04:09.824 "base_bdev_name": "Malloc2" 00:04:09.824 } 00:04:09.824 } 00:04:09.824 } 00:04:09.824 ]' 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
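[Editor's annotation, not part of the captured output] The rpc_integrity and rpc_daemon_integrity tests above both drive a create / inspect / delete cycle over the RPC socket: bdev_malloc_create 8 512, bdev_passthru_create on top of it, bdev_get_bdevs piped through jq length, then the two deletes. A sketch of that cycle using the same RPCs, assuming spdk_tgt is still running on /var/tmp/spdk.sock:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

    malloc=$($rpc bdev_malloc_create 8 512)        # 8 MiB malloc bdev, 512-byte blocks
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    $rpc bdev_get_bdevs | jq length                # expect 2: the malloc and the passthru

    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    $rpc bdev_get_bdevs | jq length                # expect 0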
00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.824 ************************************ 00:04:09.824 END TEST rpc_daemon_integrity 00:04:09.824 ************************************ 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.824 00:04:09.824 real 0m0.252s 00:04:09.824 user 0m0.138s 00:04:09.824 sys 0m0.029s 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.824 22:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.824 22:44:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.824 22:44:48 rpc -- rpc/rpc.sh@84 -- # killprocess 58916 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 58916 ']' 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@958 -- # kill -0 58916 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@959 -- # uname 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58916 00:04:09.824 killing process with pid 58916 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58916' 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@973 -- # kill 58916 00:04:09.824 22:44:48 rpc -- common/autotest_common.sh@978 -- # wait 58916 00:04:11.733 00:04:11.733 real 0m3.844s 00:04:11.733 user 0m4.170s 00:04:11.733 sys 0m0.719s 00:04:11.733 22:44:50 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.733 ************************************ 00:04:11.733 22:44:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.733 END TEST rpc 00:04:11.733 ************************************ 00:04:11.733 22:44:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:11.733 22:44:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.733 22:44:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.733 22:44:50 -- common/autotest_common.sh@10 -- # set +x 00:04:11.733 ************************************ 00:04:11.733 START TEST skip_rpc 00:04:11.733 ************************************ 00:04:11.733 22:44:50 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:11.733 * Looking for test storage... 
00:04:11.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.733 22:44:50 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:11.733 22:44:50 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:11.733 22:44:50 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:11.733 22:44:50 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.733 22:44:50 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:11.733 22:44:50 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.733 22:44:50 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:11.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.733 --rc genhtml_branch_coverage=1 00:04:11.733 --rc genhtml_function_coverage=1 00:04:11.733 --rc genhtml_legend=1 00:04:11.733 --rc geninfo_all_blocks=1 00:04:11.733 --rc geninfo_unexecuted_blocks=1 00:04:11.733 00:04:11.733 ' 00:04:11.733 22:44:50 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:11.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.733 --rc genhtml_branch_coverage=1 00:04:11.733 --rc genhtml_function_coverage=1 00:04:11.733 --rc genhtml_legend=1 00:04:11.733 --rc geninfo_all_blocks=1 00:04:11.733 --rc geninfo_unexecuted_blocks=1 00:04:11.734 00:04:11.734 ' 00:04:11.734 22:44:50 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.734 --rc genhtml_branch_coverage=1 00:04:11.734 --rc genhtml_function_coverage=1 00:04:11.734 --rc genhtml_legend=1 00:04:11.734 --rc geninfo_all_blocks=1 00:04:11.734 --rc geninfo_unexecuted_blocks=1 00:04:11.734 00:04:11.734 ' 00:04:11.734 22:44:50 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.734 --rc genhtml_branch_coverage=1 00:04:11.734 --rc genhtml_function_coverage=1 00:04:11.734 --rc genhtml_legend=1 00:04:11.734 --rc geninfo_all_blocks=1 00:04:11.734 --rc geninfo_unexecuted_blocks=1 00:04:11.734 00:04:11.734 ' 00:04:11.734 22:44:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.734 22:44:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.734 22:44:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:11.734 22:44:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.734 22:44:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.734 22:44:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.734 ************************************ 00:04:11.734 START TEST skip_rpc 00:04:11.734 ************************************ 00:04:11.734 22:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:11.734 22:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59129 00:04:11.734 22:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.734 22:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:11.734 22:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:11.734 [2024-12-13 22:44:50.805848] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:11.734 [2024-12-13 22:44:50.806355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59129 ] 00:04:11.992 [2024-12-13 22:44:50.963585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.992 [2024-12-13 22:44:51.045797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59129 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59129 ']' 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59129 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59129 00:04:17.257 killing process with pid 59129 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59129' 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59129 00:04:17.257 22:44:55 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59129 00:04:17.827 ************************************ 00:04:17.827 END TEST skip_rpc 00:04:17.827 ************************************ 00:04:17.827 00:04:17.827 real 0m6.196s 00:04:17.827 user 0m5.836s 00:04:17.827 sys 0m0.261s 00:04:17.827 22:44:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.827 22:44:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:18.086 22:44:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:18.086 22:44:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.086 22:44:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.086 22:44:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.086 ************************************ 00:04:18.086 START TEST skip_rpc_with_json 00:04:18.086 ************************************ 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59226 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59226 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59226 ']' 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.086 22:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.086 [2024-12-13 22:44:57.054695] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:18.086 [2024-12-13 22:44:57.054832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59226 ] 00:04:18.086 [2024-12-13 22:44:57.211720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.345 [2024-12-13 22:44:57.300015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.911 [2024-12-13 22:44:57.894332] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:18.911 request: 00:04:18.911 { 00:04:18.911 "trtype": "tcp", 00:04:18.911 "method": "nvmf_get_transports", 00:04:18.911 "req_id": 1 00:04:18.911 } 00:04:18.911 Got JSON-RPC error response 00:04:18.911 response: 00:04:18.911 { 00:04:18.911 "code": -19, 00:04:18.911 "message": "No such device" 00:04:18.911 } 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.911 [2024-12-13 22:44:57.902390] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.911 22:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.170 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.170 22:44:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.170 { 00:04:19.170 "subsystems": [ 00:04:19.170 { 00:04:19.170 "subsystem": "fsdev", 00:04:19.170 "config": [ 00:04:19.170 { 00:04:19.170 "method": "fsdev_set_opts", 00:04:19.170 "params": { 00:04:19.170 "fsdev_io_pool_size": 65535, 00:04:19.170 "fsdev_io_cache_size": 256 00:04:19.170 } 00:04:19.170 } 00:04:19.170 ] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "keyring", 00:04:19.170 "config": [] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "iobuf", 00:04:19.170 "config": [ 00:04:19.170 { 00:04:19.170 "method": "iobuf_set_options", 00:04:19.170 "params": { 00:04:19.170 "small_pool_count": 8192, 00:04:19.170 "large_pool_count": 1024, 00:04:19.170 "small_bufsize": 8192, 00:04:19.170 "large_bufsize": 135168, 00:04:19.170 "enable_numa": false 00:04:19.170 } 00:04:19.170 } 00:04:19.170 ] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "sock", 00:04:19.170 "config": [ 00:04:19.170 { 
00:04:19.170 "method": "sock_set_default_impl", 00:04:19.170 "params": { 00:04:19.170 "impl_name": "posix" 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "sock_impl_set_options", 00:04:19.170 "params": { 00:04:19.170 "impl_name": "ssl", 00:04:19.170 "recv_buf_size": 4096, 00:04:19.170 "send_buf_size": 4096, 00:04:19.170 "enable_recv_pipe": true, 00:04:19.170 "enable_quickack": false, 00:04:19.170 "enable_placement_id": 0, 00:04:19.170 "enable_zerocopy_send_server": true, 00:04:19.170 "enable_zerocopy_send_client": false, 00:04:19.170 "zerocopy_threshold": 0, 00:04:19.170 "tls_version": 0, 00:04:19.170 "enable_ktls": false 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "sock_impl_set_options", 00:04:19.170 "params": { 00:04:19.170 "impl_name": "posix", 00:04:19.170 "recv_buf_size": 2097152, 00:04:19.170 "send_buf_size": 2097152, 00:04:19.170 "enable_recv_pipe": true, 00:04:19.170 "enable_quickack": false, 00:04:19.170 "enable_placement_id": 0, 00:04:19.170 "enable_zerocopy_send_server": true, 00:04:19.170 "enable_zerocopy_send_client": false, 00:04:19.170 "zerocopy_threshold": 0, 00:04:19.170 "tls_version": 0, 00:04:19.170 "enable_ktls": false 00:04:19.170 } 00:04:19.170 } 00:04:19.170 ] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "vmd", 00:04:19.170 "config": [] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "accel", 00:04:19.170 "config": [ 00:04:19.170 { 00:04:19.170 "method": "accel_set_options", 00:04:19.170 "params": { 00:04:19.170 "small_cache_size": 128, 00:04:19.170 "large_cache_size": 16, 00:04:19.170 "task_count": 2048, 00:04:19.170 "sequence_count": 2048, 00:04:19.170 "buf_count": 2048 00:04:19.170 } 00:04:19.170 } 00:04:19.170 ] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "bdev", 00:04:19.170 "config": [ 00:04:19.170 { 00:04:19.170 "method": "bdev_set_options", 00:04:19.170 "params": { 00:04:19.170 "bdev_io_pool_size": 65535, 00:04:19.170 "bdev_io_cache_size": 256, 00:04:19.170 "bdev_auto_examine": true, 00:04:19.170 "iobuf_small_cache_size": 128, 00:04:19.170 "iobuf_large_cache_size": 16 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "bdev_raid_set_options", 00:04:19.170 "params": { 00:04:19.170 "process_window_size_kb": 1024, 00:04:19.170 "process_max_bandwidth_mb_sec": 0 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "bdev_iscsi_set_options", 00:04:19.170 "params": { 00:04:19.170 "timeout_sec": 30 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "bdev_nvme_set_options", 00:04:19.170 "params": { 00:04:19.170 "action_on_timeout": "none", 00:04:19.170 "timeout_us": 0, 00:04:19.170 "timeout_admin_us": 0, 00:04:19.170 "keep_alive_timeout_ms": 10000, 00:04:19.170 "arbitration_burst": 0, 00:04:19.170 "low_priority_weight": 0, 00:04:19.170 "medium_priority_weight": 0, 00:04:19.170 "high_priority_weight": 0, 00:04:19.170 "nvme_adminq_poll_period_us": 10000, 00:04:19.170 "nvme_ioq_poll_period_us": 0, 00:04:19.170 "io_queue_requests": 0, 00:04:19.170 "delay_cmd_submit": true, 00:04:19.170 "transport_retry_count": 4, 00:04:19.170 "bdev_retry_count": 3, 00:04:19.170 "transport_ack_timeout": 0, 00:04:19.170 "ctrlr_loss_timeout_sec": 0, 00:04:19.170 "reconnect_delay_sec": 0, 00:04:19.170 "fast_io_fail_timeout_sec": 0, 00:04:19.170 "disable_auto_failback": false, 00:04:19.170 "generate_uuids": false, 00:04:19.170 "transport_tos": 0, 00:04:19.170 "nvme_error_stat": false, 00:04:19.170 "rdma_srq_size": 0, 00:04:19.170 "io_path_stat": false, 
00:04:19.170 "allow_accel_sequence": false, 00:04:19.170 "rdma_max_cq_size": 0, 00:04:19.170 "rdma_cm_event_timeout_ms": 0, 00:04:19.170 "dhchap_digests": [ 00:04:19.170 "sha256", 00:04:19.170 "sha384", 00:04:19.170 "sha512" 00:04:19.170 ], 00:04:19.170 "dhchap_dhgroups": [ 00:04:19.170 "null", 00:04:19.170 "ffdhe2048", 00:04:19.170 "ffdhe3072", 00:04:19.170 "ffdhe4096", 00:04:19.170 "ffdhe6144", 00:04:19.170 "ffdhe8192" 00:04:19.170 ], 00:04:19.170 "rdma_umr_per_io": false 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "bdev_nvme_set_hotplug", 00:04:19.170 "params": { 00:04:19.170 "period_us": 100000, 00:04:19.170 "enable": false 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "bdev_wait_for_examine" 00:04:19.170 } 00:04:19.170 ] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "scsi", 00:04:19.170 "config": null 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "scheduler", 00:04:19.170 "config": [ 00:04:19.170 { 00:04:19.170 "method": "framework_set_scheduler", 00:04:19.170 "params": { 00:04:19.170 "name": "static" 00:04:19.170 } 00:04:19.170 } 00:04:19.170 ] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "vhost_scsi", 00:04:19.170 "config": [] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "vhost_blk", 00:04:19.170 "config": [] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "ublk", 00:04:19.170 "config": [] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "nbd", 00:04:19.170 "config": [] 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "subsystem": "nvmf", 00:04:19.170 "config": [ 00:04:19.170 { 00:04:19.170 "method": "nvmf_set_config", 00:04:19.170 "params": { 00:04:19.170 "discovery_filter": "match_any", 00:04:19.170 "admin_cmd_passthru": { 00:04:19.170 "identify_ctrlr": false 00:04:19.170 }, 00:04:19.170 "dhchap_digests": [ 00:04:19.170 "sha256", 00:04:19.170 "sha384", 00:04:19.170 "sha512" 00:04:19.170 ], 00:04:19.170 "dhchap_dhgroups": [ 00:04:19.170 "null", 00:04:19.170 "ffdhe2048", 00:04:19.170 "ffdhe3072", 00:04:19.170 "ffdhe4096", 00:04:19.170 "ffdhe6144", 00:04:19.170 "ffdhe8192" 00:04:19.170 ] 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "nvmf_set_max_subsystems", 00:04:19.170 "params": { 00:04:19.170 "max_subsystems": 1024 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "nvmf_set_crdt", 00:04:19.170 "params": { 00:04:19.170 "crdt1": 0, 00:04:19.170 "crdt2": 0, 00:04:19.170 "crdt3": 0 00:04:19.170 } 00:04:19.170 }, 00:04:19.170 { 00:04:19.170 "method": "nvmf_create_transport", 00:04:19.170 "params": { 00:04:19.170 "trtype": "TCP", 00:04:19.170 "max_queue_depth": 128, 00:04:19.170 "max_io_qpairs_per_ctrlr": 127, 00:04:19.170 "in_capsule_data_size": 4096, 00:04:19.170 "max_io_size": 131072, 00:04:19.171 "io_unit_size": 131072, 00:04:19.171 "max_aq_depth": 128, 00:04:19.171 "num_shared_buffers": 511, 00:04:19.171 "buf_cache_size": 4294967295, 00:04:19.171 "dif_insert_or_strip": false, 00:04:19.171 "zcopy": false, 00:04:19.171 "c2h_success": true, 00:04:19.171 "sock_priority": 0, 00:04:19.171 "abort_timeout_sec": 1, 00:04:19.171 "ack_timeout": 0, 00:04:19.171 "data_wr_pool_size": 0 00:04:19.171 } 00:04:19.171 } 00:04:19.171 ] 00:04:19.171 }, 00:04:19.171 { 00:04:19.171 "subsystem": "iscsi", 00:04:19.171 "config": [ 00:04:19.171 { 00:04:19.171 "method": "iscsi_set_options", 00:04:19.171 "params": { 00:04:19.171 "node_base": "iqn.2016-06.io.spdk", 00:04:19.171 "max_sessions": 128, 00:04:19.171 "max_connections_per_session": 2, 00:04:19.171 
"max_queue_depth": 64, 00:04:19.171 "default_time2wait": 2, 00:04:19.171 "default_time2retain": 20, 00:04:19.171 "first_burst_length": 8192, 00:04:19.171 "immediate_data": true, 00:04:19.171 "allow_duplicated_isid": false, 00:04:19.171 "error_recovery_level": 0, 00:04:19.171 "nop_timeout": 60, 00:04:19.171 "nop_in_interval": 30, 00:04:19.171 "disable_chap": false, 00:04:19.171 "require_chap": false, 00:04:19.171 "mutual_chap": false, 00:04:19.171 "chap_group": 0, 00:04:19.171 "max_large_datain_per_connection": 64, 00:04:19.171 "max_r2t_per_connection": 4, 00:04:19.171 "pdu_pool_size": 36864, 00:04:19.171 "immediate_data_pool_size": 16384, 00:04:19.171 "data_out_pool_size": 2048 00:04:19.171 } 00:04:19.171 } 00:04:19.171 ] 00:04:19.171 } 00:04:19.171 ] 00:04:19.171 } 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59226 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59226 ']' 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59226 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59226 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.171 killing process with pid 59226 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59226' 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59226 00:04:19.171 22:44:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59226 00:04:20.108 22:44:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59261 00:04:20.108 22:44:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:20.108 22:44:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59261 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59261 ']' 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59261 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59261 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.376 killing process with pid 59261 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59261' 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 59261 00:04:25.376 22:45:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59261 00:04:26.312 22:45:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.573 00:04:26.573 real 0m8.474s 00:04:26.573 user 0m8.116s 00:04:26.573 sys 0m0.584s 00:04:26.573 ************************************ 00:04:26.573 END TEST skip_rpc_with_json 00:04:26.573 ************************************ 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.573 22:45:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.573 22:45:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.573 22:45:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.573 22:45:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.573 ************************************ 00:04:26.573 START TEST skip_rpc_with_delay 00:04:26.573 ************************************ 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.573 [2024-12-13 22:45:05.576575] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
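The *ERROR* line above is the expected outcome of the skip_rpc_with_delay case: spdk_tgt refuses to combine --wait-for-rpc with --no-rpc-server, and the test passes precisely because the launch is rejected. A minimal stand-alone sketch of that same check, assuming the binary path and flags shown in the log (the grep pattern and the timeout wrapper are illustrative additions, not part of the test script):

    # Sketch only: expect spdk_tgt to abort when --wait-for-rpc is combined
    # with --no-rpc-server, as in the skip_rpc_with_delay case logged above.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path taken from the log
    out=$(timeout 10 "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc 2>&1)
    if echo "$out" | grep -q "Cannot use '--wait-for-rpc'"; then
        echo "startup rejected as expected"
    else
        echo "unexpected: spdk_tgt did not reject the flag combination" >&2
        exit 1
    fi
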
00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.573 00:04:26.573 real 0m0.104s 00:04:26.573 user 0m0.057s 00:04:26.573 sys 0m0.046s 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.573 22:45:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:26.573 ************************************ 00:04:26.573 END TEST skip_rpc_with_delay 00:04:26.573 ************************************ 00:04:26.573 22:45:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:26.573 22:45:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:26.573 22:45:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:26.573 22:45:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.573 22:45:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.573 22:45:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.573 ************************************ 00:04:26.573 START TEST exit_on_failed_rpc_init 00:04:26.574 ************************************ 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59378 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59378 00:04:26.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59378 ']' 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.574 22:45:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.833 [2024-12-13 22:45:05.755785] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:26.833 [2024-12-13 22:45:05.756047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59378 ] 00:04:26.833 [2024-12-13 22:45:05.915124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.092 [2024-12-13 22:45:06.002432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:27.659 22:45:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.659 [2024-12-13 22:45:06.678450] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:27.659 [2024-12-13 22:45:06.678566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59396 ] 00:04:27.953 [2024-12-13 22:45:06.839202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.953 [2024-12-13 22:45:06.940547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.953 [2024-12-13 22:45:06.940625] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:27.953 [2024-12-13 22:45:06.940639] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:27.953 [2024-12-13 22:45:06.940652] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59378 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59378 ']' 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59378 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59378 00:04:28.211 killing process with pid 59378 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59378' 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59378 00:04:28.211 22:45:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59378 00:04:29.586 00:04:29.586 real 0m2.648s 00:04:29.586 user 0m2.989s 00:04:29.586 sys 0m0.414s 00:04:29.586 22:45:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.586 ************************************ 00:04:29.586 END TEST exit_on_failed_rpc_init 00:04:29.586 ************************************ 00:04:29.586 22:45:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.586 22:45:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:29.586 00:04:29.586 real 0m17.801s 00:04:29.586 user 0m17.142s 00:04:29.586 sys 0m1.482s 00:04:29.586 22:45:08 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.586 ************************************ 00:04:29.586 END TEST skip_rpc 00:04:29.586 ************************************ 00:04:29.586 22:45:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.586 22:45:08 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.586 22:45:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.586 22:45:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.586 22:45:08 -- common/autotest_common.sh@10 -- # set +x 00:04:29.586 
************************************ 00:04:29.586 START TEST rpc_client 00:04:29.586 ************************************ 00:04:29.586 22:45:08 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.586 * Looking for test storage... 00:04:29.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:29.586 22:45:08 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.586 22:45:08 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.586 22:45:08 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.586 22:45:08 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:29.586 22:45:08 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.587 22:45:08 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:29.587 22:45:08 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:29.587 22:45:08 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.587 22:45:08 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:29.587 22:45:08 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.587 22:45:08 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.587 22:45:08 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.587 22:45:08 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:29.587 22:45:08 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.587 22:45:08 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.587 --rc genhtml_branch_coverage=1 00:04:29.587 --rc genhtml_function_coverage=1 00:04:29.587 --rc genhtml_legend=1 00:04:29.587 --rc geninfo_all_blocks=1 00:04:29.587 --rc geninfo_unexecuted_blocks=1 00:04:29.587 00:04:29.587 ' 00:04:29.587 22:45:08 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.587 --rc genhtml_branch_coverage=1 00:04:29.587 --rc genhtml_function_coverage=1 00:04:29.587 --rc genhtml_legend=1 00:04:29.587 --rc geninfo_all_blocks=1 00:04:29.587 --rc geninfo_unexecuted_blocks=1 00:04:29.587 00:04:29.587 ' 00:04:29.587 22:45:08 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.587 --rc genhtml_branch_coverage=1 00:04:29.587 --rc genhtml_function_coverage=1 00:04:29.587 --rc genhtml_legend=1 00:04:29.587 --rc geninfo_all_blocks=1 00:04:29.587 --rc geninfo_unexecuted_blocks=1 00:04:29.587 00:04:29.587 ' 00:04:29.587 22:45:08 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.587 --rc genhtml_branch_coverage=1 00:04:29.587 --rc genhtml_function_coverage=1 00:04:29.587 --rc genhtml_legend=1 00:04:29.587 --rc geninfo_all_blocks=1 00:04:29.587 --rc geninfo_unexecuted_blocks=1 00:04:29.587 00:04:29.587 ' 00:04:29.587 22:45:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:29.587 OK 00:04:29.587 22:45:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:29.587 00:04:29.587 real 0m0.187s 00:04:29.587 user 0m0.110s 00:04:29.587 sys 0m0.080s 00:04:29.587 22:45:08 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.587 22:45:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:29.587 ************************************ 00:04:29.587 END TEST rpc_client 00:04:29.587 ************************************ 00:04:29.587 22:45:08 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:29.587 22:45:08 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.587 22:45:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.587 22:45:08 -- common/autotest_common.sh@10 -- # set +x 00:04:29.587 ************************************ 00:04:29.587 START TEST json_config 00:04:29.587 ************************************ 00:04:29.587 22:45:08 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:29.587 22:45:08 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.587 22:45:08 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.587 22:45:08 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.849 22:45:08 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.849 22:45:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.849 22:45:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.849 22:45:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.849 22:45:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.849 22:45:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.849 22:45:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.849 22:45:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.849 22:45:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.849 22:45:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.849 22:45:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.849 22:45:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.849 22:45:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:29.849 22:45:08 json_config -- scripts/common.sh@345 -- # : 1 00:04:29.849 22:45:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.849 22:45:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.849 22:45:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:29.849 22:45:08 json_config -- scripts/common.sh@353 -- # local d=1 00:04:29.849 22:45:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.849 22:45:08 json_config -- scripts/common.sh@355 -- # echo 1 00:04:29.849 22:45:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.849 22:45:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:29.849 22:45:08 json_config -- scripts/common.sh@353 -- # local d=2 00:04:29.849 22:45:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.849 22:45:08 json_config -- scripts/common.sh@355 -- # echo 2 00:04:29.849 22:45:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.849 22:45:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.849 22:45:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.849 22:45:08 json_config -- scripts/common.sh@368 -- # return 0 00:04:29.849 22:45:08 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.849 22:45:08 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.849 --rc genhtml_branch_coverage=1 00:04:29.849 --rc genhtml_function_coverage=1 00:04:29.849 --rc genhtml_legend=1 00:04:29.849 --rc geninfo_all_blocks=1 00:04:29.849 --rc geninfo_unexecuted_blocks=1 00:04:29.849 00:04:29.849 ' 00:04:29.849 22:45:08 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.849 --rc genhtml_branch_coverage=1 00:04:29.849 --rc genhtml_function_coverage=1 00:04:29.849 --rc genhtml_legend=1 00:04:29.849 --rc geninfo_all_blocks=1 00:04:29.849 --rc geninfo_unexecuted_blocks=1 00:04:29.849 00:04:29.849 ' 00:04:29.849 22:45:08 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.849 --rc genhtml_branch_coverage=1 00:04:29.849 --rc genhtml_function_coverage=1 00:04:29.849 --rc genhtml_legend=1 00:04:29.849 --rc geninfo_all_blocks=1 00:04:29.849 --rc geninfo_unexecuted_blocks=1 00:04:29.849 00:04:29.849 ' 00:04:29.849 22:45:08 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.849 --rc genhtml_branch_coverage=1 00:04:29.849 --rc genhtml_function_coverage=1 00:04:29.849 --rc genhtml_legend=1 00:04:29.849 --rc geninfo_all_blocks=1 00:04:29.849 --rc geninfo_unexecuted_blocks=1 00:04:29.849 00:04:29.849 ' 00:04:29.849 22:45:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.849 22:45:08 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ffb3371-6748-4348-a0e1-15b4033c0cdb 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1ffb3371-6748-4348-a0e1-15b4033c0cdb 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:29.849 22:45:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:29.849 22:45:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.849 22:45:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.849 22:45:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.849 22:45:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.849 22:45:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.849 22:45:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.849 22:45:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:29.849 22:45:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@51 -- # : 0 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:29.849 22:45:08 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:29.849 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:29.849 22:45:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:29.849 22:45:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:29.849 22:45:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:29.849 22:45:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:29.849 22:45:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:29.849 22:45:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:29.849 22:45:08 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:29.849 WARNING: No tests are enabled so not running JSON configuration tests 00:04:29.849 22:45:08 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:29.849 00:04:29.849 real 0m0.132s 00:04:29.849 user 0m0.085s 00:04:29.849 sys 0m0.049s 00:04:29.849 22:45:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.849 22:45:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.849 ************************************ 00:04:29.849 END TEST json_config 00:04:29.849 ************************************ 00:04:29.849 22:45:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:29.850 22:45:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.850 22:45:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.850 22:45:08 -- common/autotest_common.sh@10 -- # set +x 00:04:29.850 ************************************ 00:04:29.850 START TEST json_config_extra_key 00:04:29.850 ************************************ 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.850 22:45:08 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.850 --rc genhtml_branch_coverage=1 00:04:29.850 --rc genhtml_function_coverage=1 00:04:29.850 --rc genhtml_legend=1 00:04:29.850 --rc geninfo_all_blocks=1 00:04:29.850 --rc geninfo_unexecuted_blocks=1 00:04:29.850 00:04:29.850 ' 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.850 --rc genhtml_branch_coverage=1 00:04:29.850 --rc genhtml_function_coverage=1 00:04:29.850 --rc genhtml_legend=1 00:04:29.850 --rc geninfo_all_blocks=1 00:04:29.850 --rc geninfo_unexecuted_blocks=1 00:04:29.850 00:04:29.850 ' 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.850 --rc genhtml_branch_coverage=1 00:04:29.850 --rc genhtml_function_coverage=1 00:04:29.850 --rc genhtml_legend=1 00:04:29.850 --rc geninfo_all_blocks=1 00:04:29.850 --rc geninfo_unexecuted_blocks=1 00:04:29.850 00:04:29.850 ' 00:04:29.850 22:45:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.850 --rc genhtml_branch_coverage=1 00:04:29.850 --rc 
genhtml_function_coverage=1 00:04:29.850 --rc genhtml_legend=1 00:04:29.850 --rc geninfo_all_blocks=1 00:04:29.850 --rc geninfo_unexecuted_blocks=1 00:04:29.850 00:04:29.850 ' 00:04:29.850 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ffb3371-6748-4348-a0e1-15b4033c0cdb 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1ffb3371-6748-4348-a0e1-15b4033c0cdb 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.850 22:45:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.850 22:45:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.850 22:45:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.850 22:45:08 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.850 22:45:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:29.850 22:45:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.850 22:45:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:29.850 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.112 22:45:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.112 22:45:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.112 22:45:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:30.112 INFO: launching applications... 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
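A note on the "[: : integer expression expected" message recorded twice in this block: it comes from a numeric test in test/nvmf/common.sh (build_nvmf_app_args) being handed an empty string. A minimal sketch of the failure mode and the usual guard; the flag name below is hypothetical, since the log does not show which variable is empty at line 33:

  flag=""                                                    # hypothetical: empty CI flag reaching a numeric test
  [ "$flag" -eq 1 ] && echo enabled                          # prints "[: : integer expression expected"
  [ "${flag:-0}" -eq 1 ] && echo enabled || echo disabled    # guarded form, no warning

The run continues regardless, because the failed test simply evaluates false; the warning itself is cosmetic here.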
00:04:30.112 22:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59590 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.112 Waiting for target to run... 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59590 /var/tmp/spdk_tgt.sock 00:04:30.112 22:45:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59590 ']' 00:04:30.112 22:45:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.112 22:45:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.112 22:45:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.112 22:45:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.112 22:45:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.112 22:45:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:30.112 [2024-12-13 22:45:09.074308] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:30.112 [2024-12-13 22:45:09.074714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59590 ] 00:04:30.374 [2024-12-13 22:45:09.485372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.635 [2024-12-13 22:45:09.632168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.208 22:45:10 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.208 00:04:31.208 INFO: shutting down applications... 00:04:31.208 22:45:10 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:31.208 22:45:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:31.208 22:45:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
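For reference, the start/stop sequence traced here and in the shutdown entries that follow reduces to: launch spdk_tgt in the background with its RPC socket, poll until the socket answers, run the test, then SIGINT the pid and give it a 30 x 0.5 s grace period. A condensed sketch under assumptions (SPDK_BIN_DIR and the untimed readiness loop are simplifications; the real helpers are json_config/common.sh and waitforlisten):

  app_sock=/var/tmp/spdk_tgt.sock
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 -s 1024 -r "$app_sock" --json extra_key.json &
  app_pid=$!
  until scripts/rpc.py -s "$app_sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                # same cadence as the trace above
  done
  # ... exercise the config loaded from extra_key.json ...
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do             # grace period, as in json_config/common.sh@40
      kill -0 "$app_pid" 2>/dev/null || break
      sleep 0.5
  done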
00:04:31.208 22:45:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:31.208 22:45:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:31.208 22:45:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.208 22:45:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59590 ]] 00:04:31.208 22:45:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59590 00:04:31.208 22:45:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.208 22:45:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.208 22:45:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59590 00:04:31.208 22:45:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.781 22:45:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.781 22:45:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.781 22:45:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59590 00:04:31.781 22:45:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.353 22:45:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.353 22:45:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.353 22:45:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59590 00:04:32.353 22:45:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.615 22:45:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.615 22:45:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.616 22:45:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59590 00:04:32.616 22:45:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.188 22:45:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.188 22:45:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.188 22:45:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59590 00:04:33.188 22:45:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.760 22:45:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.760 22:45:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.760 22:45:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59590 00:04:33.760 22:45:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.760 22:45:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:33.760 22:45:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.760 SPDK target shutdown done 00:04:33.760 Success 00:04:33.760 22:45:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.760 22:45:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:33.760 ************************************ 00:04:33.760 END TEST json_config_extra_key 00:04:33.760 ************************************ 00:04:33.760 00:04:33.760 real 0m3.900s 00:04:33.760 user 0m3.309s 00:04:33.760 sys 0m0.555s 00:04:33.760 22:45:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.760 22:45:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.760 22:45:12 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.760 22:45:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.760 22:45:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.760 22:45:12 -- common/autotest_common.sh@10 -- # set +x 00:04:33.760 ************************************ 00:04:33.760 START TEST alias_rpc 00:04:33.760 ************************************ 00:04:33.760 22:45:12 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.760 * Looking for test storage... 00:04:33.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:33.760 22:45:12 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.760 22:45:12 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.760 22:45:12 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.020 22:45:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.020 --rc genhtml_branch_coverage=1 00:04:34.020 --rc genhtml_function_coverage=1 00:04:34.020 --rc genhtml_legend=1 00:04:34.020 --rc geninfo_all_blocks=1 00:04:34.020 --rc geninfo_unexecuted_blocks=1 00:04:34.020 00:04:34.020 ' 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.020 --rc genhtml_branch_coverage=1 00:04:34.020 --rc genhtml_function_coverage=1 00:04:34.020 --rc genhtml_legend=1 00:04:34.020 --rc geninfo_all_blocks=1 00:04:34.020 --rc geninfo_unexecuted_blocks=1 00:04:34.020 00:04:34.020 ' 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.020 --rc genhtml_branch_coverage=1 00:04:34.020 --rc genhtml_function_coverage=1 00:04:34.020 --rc genhtml_legend=1 00:04:34.020 --rc geninfo_all_blocks=1 00:04:34.020 --rc geninfo_unexecuted_blocks=1 00:04:34.020 00:04:34.020 ' 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.020 --rc genhtml_branch_coverage=1 00:04:34.020 --rc genhtml_function_coverage=1 00:04:34.020 --rc genhtml_legend=1 00:04:34.020 --rc geninfo_all_blocks=1 00:04:34.020 --rc geninfo_unexecuted_blocks=1 00:04:34.020 00:04:34.020 ' 00:04:34.020 22:45:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.020 22:45:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59695 00:04:34.020 22:45:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.020 22:45:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59695 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59695 ']' 00:04:34.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
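The lt 1.15 2 / cmp_versions walk repeated before each test block decides whether the installed lcov predates 2.x and therefore needs the --rc lcov_branch_coverage / lcov_function_coverage options seen in the LCOV_OPTS exports here. A condensed restatement of that comparison (version_lt is a hypothetical name; the in-tree helpers are lt/cmp_versions in scripts/common.sh):

  version_lt() {
      local IFS='.-:'                          # split fields the same way the trace does
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo "old lcov: keep the branch/function coverage flags"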
00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.020 22:45:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.020 [2024-12-13 22:45:13.001786] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:34.020 [2024-12-13 22:45:13.002100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59695 ] 00:04:34.279 [2024-12-13 22:45:13.161221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.279 [2024-12-13 22:45:13.268099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.851 22:45:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.851 22:45:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.851 22:45:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:35.113 22:45:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59695 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59695 ']' 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59695 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59695 00:04:35.113 killing process with pid 59695 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59695' 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@973 -- # kill 59695 00:04:35.113 22:45:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 59695 00:04:37.057 ************************************ 00:04:37.057 END TEST alias_rpc 00:04:37.057 ************************************ 00:04:37.058 00:04:37.058 real 0m3.125s 00:04:37.058 user 0m3.126s 00:04:37.058 sys 0m0.496s 00:04:37.058 22:45:15 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.058 22:45:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.058 22:45:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:37.058 22:45:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:37.058 22:45:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.058 22:45:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.058 22:45:15 -- common/autotest_common.sh@10 -- # set +x 00:04:37.058 ************************************ 00:04:37.058 START TEST spdkcli_tcp 00:04:37.058 ************************************ 00:04:37.058 22:45:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:37.058 * Looking for 
test storage... 00:04:37.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.058 22:45:16 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.058 --rc genhtml_branch_coverage=1 00:04:37.058 --rc genhtml_function_coverage=1 00:04:37.058 --rc genhtml_legend=1 00:04:37.058 --rc geninfo_all_blocks=1 00:04:37.058 --rc geninfo_unexecuted_blocks=1 00:04:37.058 00:04:37.058 ' 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.058 --rc genhtml_branch_coverage=1 00:04:37.058 --rc genhtml_function_coverage=1 00:04:37.058 --rc genhtml_legend=1 00:04:37.058 --rc geninfo_all_blocks=1 00:04:37.058 --rc geninfo_unexecuted_blocks=1 
00:04:37.058 00:04:37.058 ' 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.058 --rc genhtml_branch_coverage=1 00:04:37.058 --rc genhtml_function_coverage=1 00:04:37.058 --rc genhtml_legend=1 00:04:37.058 --rc geninfo_all_blocks=1 00:04:37.058 --rc geninfo_unexecuted_blocks=1 00:04:37.058 00:04:37.058 ' 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.058 --rc genhtml_branch_coverage=1 00:04:37.058 --rc genhtml_function_coverage=1 00:04:37.058 --rc genhtml_legend=1 00:04:37.058 --rc geninfo_all_blocks=1 00:04:37.058 --rc geninfo_unexecuted_blocks=1 00:04:37.058 00:04:37.058 ' 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59791 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59791 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59791 ']' 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.058 22:45:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.058 22:45:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:37.321 [2024-12-13 22:45:16.223717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
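As the next entries show, this block starts the target on two cores and then bridges TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket with socat, so rpc.py can be exercised over TCP. A condensed sketch using the same commands the trace records (readiness waiting and the cleanup trap are elided; pids are illustrative):

  build/bin/spdk_tgt -m 0x3 -p 0 &                                 # RPC on the default /var/tmp/spdk.sock
  tgt_pid=$!
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &          # TCP front end for the UNIX socket
  socat_pid=$!
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods  # 100 retries, 2 s timeout, as traced
  kill "$socat_pid" "$tgt_pid"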
00:04:37.321 [2024-12-13 22:45:16.224127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59791 ] 00:04:37.321 [2024-12-13 22:45:16.388156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.584 [2024-12-13 22:45:16.519852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.584 [2024-12-13 22:45:16.519896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.155 22:45:17 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.155 22:45:17 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:38.155 22:45:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59808 00:04:38.155 22:45:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:38.155 22:45:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:38.417 [ 00:04:38.417 "bdev_malloc_delete", 00:04:38.417 "bdev_malloc_create", 00:04:38.417 "bdev_null_resize", 00:04:38.417 "bdev_null_delete", 00:04:38.417 "bdev_null_create", 00:04:38.417 "bdev_nvme_cuse_unregister", 00:04:38.417 "bdev_nvme_cuse_register", 00:04:38.417 "bdev_opal_new_user", 00:04:38.417 "bdev_opal_set_lock_state", 00:04:38.417 "bdev_opal_delete", 00:04:38.417 "bdev_opal_get_info", 00:04:38.417 "bdev_opal_create", 00:04:38.417 "bdev_nvme_opal_revert", 00:04:38.417 "bdev_nvme_opal_init", 00:04:38.417 "bdev_nvme_send_cmd", 00:04:38.417 "bdev_nvme_set_keys", 00:04:38.417 "bdev_nvme_get_path_iostat", 00:04:38.417 "bdev_nvme_get_mdns_discovery_info", 00:04:38.417 "bdev_nvme_stop_mdns_discovery", 00:04:38.417 "bdev_nvme_start_mdns_discovery", 00:04:38.417 "bdev_nvme_set_multipath_policy", 00:04:38.417 "bdev_nvme_set_preferred_path", 00:04:38.417 "bdev_nvme_get_io_paths", 00:04:38.417 "bdev_nvme_remove_error_injection", 00:04:38.417 "bdev_nvme_add_error_injection", 00:04:38.417 "bdev_nvme_get_discovery_info", 00:04:38.417 "bdev_nvme_stop_discovery", 00:04:38.417 "bdev_nvme_start_discovery", 00:04:38.417 "bdev_nvme_get_controller_health_info", 00:04:38.417 "bdev_nvme_disable_controller", 00:04:38.417 "bdev_nvme_enable_controller", 00:04:38.417 "bdev_nvme_reset_controller", 00:04:38.417 "bdev_nvme_get_transport_statistics", 00:04:38.417 "bdev_nvme_apply_firmware", 00:04:38.417 "bdev_nvme_detach_controller", 00:04:38.417 "bdev_nvme_get_controllers", 00:04:38.417 "bdev_nvme_attach_controller", 00:04:38.417 "bdev_nvme_set_hotplug", 00:04:38.417 "bdev_nvme_set_options", 00:04:38.417 "bdev_passthru_delete", 00:04:38.417 "bdev_passthru_create", 00:04:38.417 "bdev_lvol_set_parent_bdev", 00:04:38.417 "bdev_lvol_set_parent", 00:04:38.417 "bdev_lvol_check_shallow_copy", 00:04:38.417 "bdev_lvol_start_shallow_copy", 00:04:38.417 "bdev_lvol_grow_lvstore", 00:04:38.417 "bdev_lvol_get_lvols", 00:04:38.417 "bdev_lvol_get_lvstores", 00:04:38.417 "bdev_lvol_delete", 00:04:38.417 "bdev_lvol_set_read_only", 00:04:38.417 "bdev_lvol_resize", 00:04:38.417 "bdev_lvol_decouple_parent", 00:04:38.417 "bdev_lvol_inflate", 00:04:38.417 "bdev_lvol_rename", 00:04:38.417 "bdev_lvol_clone_bdev", 00:04:38.417 "bdev_lvol_clone", 00:04:38.417 "bdev_lvol_snapshot", 00:04:38.417 "bdev_lvol_create", 00:04:38.417 "bdev_lvol_delete_lvstore", 00:04:38.417 "bdev_lvol_rename_lvstore", 00:04:38.417 
"bdev_lvol_create_lvstore", 00:04:38.417 "bdev_raid_set_options", 00:04:38.417 "bdev_raid_remove_base_bdev", 00:04:38.417 "bdev_raid_add_base_bdev", 00:04:38.417 "bdev_raid_delete", 00:04:38.417 "bdev_raid_create", 00:04:38.417 "bdev_raid_get_bdevs", 00:04:38.417 "bdev_error_inject_error", 00:04:38.417 "bdev_error_delete", 00:04:38.417 "bdev_error_create", 00:04:38.417 "bdev_split_delete", 00:04:38.417 "bdev_split_create", 00:04:38.417 "bdev_delay_delete", 00:04:38.417 "bdev_delay_create", 00:04:38.417 "bdev_delay_update_latency", 00:04:38.417 "bdev_zone_block_delete", 00:04:38.417 "bdev_zone_block_create", 00:04:38.417 "blobfs_create", 00:04:38.417 "blobfs_detect", 00:04:38.417 "blobfs_set_cache_size", 00:04:38.417 "bdev_xnvme_delete", 00:04:38.417 "bdev_xnvme_create", 00:04:38.417 "bdev_aio_delete", 00:04:38.417 "bdev_aio_rescan", 00:04:38.417 "bdev_aio_create", 00:04:38.417 "bdev_ftl_set_property", 00:04:38.417 "bdev_ftl_get_properties", 00:04:38.417 "bdev_ftl_get_stats", 00:04:38.417 "bdev_ftl_unmap", 00:04:38.417 "bdev_ftl_unload", 00:04:38.417 "bdev_ftl_delete", 00:04:38.417 "bdev_ftl_load", 00:04:38.417 "bdev_ftl_create", 00:04:38.417 "bdev_virtio_attach_controller", 00:04:38.417 "bdev_virtio_scsi_get_devices", 00:04:38.417 "bdev_virtio_detach_controller", 00:04:38.417 "bdev_virtio_blk_set_hotplug", 00:04:38.417 "bdev_iscsi_delete", 00:04:38.417 "bdev_iscsi_create", 00:04:38.417 "bdev_iscsi_set_options", 00:04:38.417 "accel_error_inject_error", 00:04:38.417 "ioat_scan_accel_module", 00:04:38.417 "dsa_scan_accel_module", 00:04:38.417 "iaa_scan_accel_module", 00:04:38.417 "keyring_file_remove_key", 00:04:38.417 "keyring_file_add_key", 00:04:38.417 "keyring_linux_set_options", 00:04:38.417 "fsdev_aio_delete", 00:04:38.417 "fsdev_aio_create", 00:04:38.417 "iscsi_get_histogram", 00:04:38.417 "iscsi_enable_histogram", 00:04:38.417 "iscsi_set_options", 00:04:38.417 "iscsi_get_auth_groups", 00:04:38.417 "iscsi_auth_group_remove_secret", 00:04:38.417 "iscsi_auth_group_add_secret", 00:04:38.418 "iscsi_delete_auth_group", 00:04:38.418 "iscsi_create_auth_group", 00:04:38.418 "iscsi_set_discovery_auth", 00:04:38.418 "iscsi_get_options", 00:04:38.418 "iscsi_target_node_request_logout", 00:04:38.418 "iscsi_target_node_set_redirect", 00:04:38.418 "iscsi_target_node_set_auth", 00:04:38.418 "iscsi_target_node_add_lun", 00:04:38.418 "iscsi_get_stats", 00:04:38.418 "iscsi_get_connections", 00:04:38.418 "iscsi_portal_group_set_auth", 00:04:38.418 "iscsi_start_portal_group", 00:04:38.418 "iscsi_delete_portal_group", 00:04:38.418 "iscsi_create_portal_group", 00:04:38.418 "iscsi_get_portal_groups", 00:04:38.418 "iscsi_delete_target_node", 00:04:38.418 "iscsi_target_node_remove_pg_ig_maps", 00:04:38.418 "iscsi_target_node_add_pg_ig_maps", 00:04:38.418 "iscsi_create_target_node", 00:04:38.418 "iscsi_get_target_nodes", 00:04:38.418 "iscsi_delete_initiator_group", 00:04:38.418 "iscsi_initiator_group_remove_initiators", 00:04:38.418 "iscsi_initiator_group_add_initiators", 00:04:38.418 "iscsi_create_initiator_group", 00:04:38.418 "iscsi_get_initiator_groups", 00:04:38.418 "nvmf_set_crdt", 00:04:38.418 "nvmf_set_config", 00:04:38.418 "nvmf_set_max_subsystems", 00:04:38.418 "nvmf_stop_mdns_prr", 00:04:38.418 "nvmf_publish_mdns_prr", 00:04:38.418 "nvmf_subsystem_get_listeners", 00:04:38.418 "nvmf_subsystem_get_qpairs", 00:04:38.418 "nvmf_subsystem_get_controllers", 00:04:38.418 "nvmf_get_stats", 00:04:38.418 "nvmf_get_transports", 00:04:38.418 "nvmf_create_transport", 00:04:38.418 "nvmf_get_targets", 00:04:38.418 
"nvmf_delete_target", 00:04:38.418 "nvmf_create_target", 00:04:38.418 "nvmf_subsystem_allow_any_host", 00:04:38.418 "nvmf_subsystem_set_keys", 00:04:38.418 "nvmf_subsystem_remove_host", 00:04:38.418 "nvmf_subsystem_add_host", 00:04:38.418 "nvmf_ns_remove_host", 00:04:38.418 "nvmf_ns_add_host", 00:04:38.418 "nvmf_subsystem_remove_ns", 00:04:38.418 "nvmf_subsystem_set_ns_ana_group", 00:04:38.418 "nvmf_subsystem_add_ns", 00:04:38.418 "nvmf_subsystem_listener_set_ana_state", 00:04:38.418 "nvmf_discovery_get_referrals", 00:04:38.418 "nvmf_discovery_remove_referral", 00:04:38.418 "nvmf_discovery_add_referral", 00:04:38.418 "nvmf_subsystem_remove_listener", 00:04:38.418 "nvmf_subsystem_add_listener", 00:04:38.418 "nvmf_delete_subsystem", 00:04:38.418 "nvmf_create_subsystem", 00:04:38.418 "nvmf_get_subsystems", 00:04:38.418 "env_dpdk_get_mem_stats", 00:04:38.418 "nbd_get_disks", 00:04:38.418 "nbd_stop_disk", 00:04:38.418 "nbd_start_disk", 00:04:38.418 "ublk_recover_disk", 00:04:38.418 "ublk_get_disks", 00:04:38.418 "ublk_stop_disk", 00:04:38.418 "ublk_start_disk", 00:04:38.418 "ublk_destroy_target", 00:04:38.418 "ublk_create_target", 00:04:38.418 "virtio_blk_create_transport", 00:04:38.418 "virtio_blk_get_transports", 00:04:38.418 "vhost_controller_set_coalescing", 00:04:38.418 "vhost_get_controllers", 00:04:38.418 "vhost_delete_controller", 00:04:38.418 "vhost_create_blk_controller", 00:04:38.418 "vhost_scsi_controller_remove_target", 00:04:38.418 "vhost_scsi_controller_add_target", 00:04:38.418 "vhost_start_scsi_controller", 00:04:38.418 "vhost_create_scsi_controller", 00:04:38.418 "thread_set_cpumask", 00:04:38.418 "scheduler_set_options", 00:04:38.418 "framework_get_governor", 00:04:38.418 "framework_get_scheduler", 00:04:38.418 "framework_set_scheduler", 00:04:38.418 "framework_get_reactors", 00:04:38.418 "thread_get_io_channels", 00:04:38.418 "thread_get_pollers", 00:04:38.418 "thread_get_stats", 00:04:38.418 "framework_monitor_context_switch", 00:04:38.418 "spdk_kill_instance", 00:04:38.418 "log_enable_timestamps", 00:04:38.418 "log_get_flags", 00:04:38.418 "log_clear_flag", 00:04:38.418 "log_set_flag", 00:04:38.418 "log_get_level", 00:04:38.418 "log_set_level", 00:04:38.418 "log_get_print_level", 00:04:38.418 "log_set_print_level", 00:04:38.418 "framework_enable_cpumask_locks", 00:04:38.418 "framework_disable_cpumask_locks", 00:04:38.418 "framework_wait_init", 00:04:38.418 "framework_start_init", 00:04:38.418 "scsi_get_devices", 00:04:38.418 "bdev_get_histogram", 00:04:38.418 "bdev_enable_histogram", 00:04:38.418 "bdev_set_qos_limit", 00:04:38.418 "bdev_set_qd_sampling_period", 00:04:38.418 "bdev_get_bdevs", 00:04:38.418 "bdev_reset_iostat", 00:04:38.418 "bdev_get_iostat", 00:04:38.418 "bdev_examine", 00:04:38.418 "bdev_wait_for_examine", 00:04:38.418 "bdev_set_options", 00:04:38.418 "accel_get_stats", 00:04:38.418 "accel_set_options", 00:04:38.418 "accel_set_driver", 00:04:38.418 "accel_crypto_key_destroy", 00:04:38.418 "accel_crypto_keys_get", 00:04:38.418 "accel_crypto_key_create", 00:04:38.418 "accel_assign_opc", 00:04:38.418 "accel_get_module_info", 00:04:38.418 "accel_get_opc_assignments", 00:04:38.418 "vmd_rescan", 00:04:38.418 "vmd_remove_device", 00:04:38.418 "vmd_enable", 00:04:38.418 "sock_get_default_impl", 00:04:38.418 "sock_set_default_impl", 00:04:38.418 "sock_impl_set_options", 00:04:38.418 "sock_impl_get_options", 00:04:38.418 "iobuf_get_stats", 00:04:38.418 "iobuf_set_options", 00:04:38.418 "keyring_get_keys", 00:04:38.418 "framework_get_pci_devices", 00:04:38.418 
"framework_get_config", 00:04:38.418 "framework_get_subsystems", 00:04:38.418 "fsdev_set_opts", 00:04:38.418 "fsdev_get_opts", 00:04:38.418 "trace_get_info", 00:04:38.418 "trace_get_tpoint_group_mask", 00:04:38.418 "trace_disable_tpoint_group", 00:04:38.418 "trace_enable_tpoint_group", 00:04:38.418 "trace_clear_tpoint_mask", 00:04:38.418 "trace_set_tpoint_mask", 00:04:38.418 "notify_get_notifications", 00:04:38.418 "notify_get_types", 00:04:38.418 "spdk_get_version", 00:04:38.418 "rpc_get_methods" 00:04:38.418 ] 00:04:38.418 22:45:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.418 22:45:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:38.418 22:45:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59791 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59791 ']' 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59791 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59791 00:04:38.418 killing process with pid 59791 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59791' 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59791 00:04:38.418 22:45:17 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59791 00:04:40.328 00:04:40.328 real 0m3.094s 00:04:40.328 user 0m5.455s 00:04:40.328 sys 0m0.569s 00:04:40.328 22:45:19 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.328 ************************************ 00:04:40.328 END TEST spdkcli_tcp 00:04:40.328 ************************************ 00:04:40.328 22:45:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.328 22:45:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:40.328 22:45:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.328 22:45:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.328 22:45:19 -- common/autotest_common.sh@10 -- # set +x 00:04:40.328 ************************************ 00:04:40.328 START TEST dpdk_mem_utility 00:04:40.328 ************************************ 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:40.328 * Looking for test storage... 
00:04:40.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.328 22:45:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.328 --rc genhtml_branch_coverage=1 00:04:40.328 --rc genhtml_function_coverage=1 00:04:40.328 --rc genhtml_legend=1 00:04:40.328 --rc geninfo_all_blocks=1 00:04:40.328 --rc geninfo_unexecuted_blocks=1 00:04:40.328 00:04:40.328 ' 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.328 --rc 
genhtml_branch_coverage=1 00:04:40.328 --rc genhtml_function_coverage=1 00:04:40.328 --rc genhtml_legend=1 00:04:40.328 --rc geninfo_all_blocks=1 00:04:40.328 --rc geninfo_unexecuted_blocks=1 00:04:40.328 00:04:40.328 ' 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.328 --rc genhtml_branch_coverage=1 00:04:40.328 --rc genhtml_function_coverage=1 00:04:40.328 --rc genhtml_legend=1 00:04:40.328 --rc geninfo_all_blocks=1 00:04:40.328 --rc geninfo_unexecuted_blocks=1 00:04:40.328 00:04:40.328 ' 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.328 --rc genhtml_branch_coverage=1 00:04:40.328 --rc genhtml_function_coverage=1 00:04:40.328 --rc genhtml_legend=1 00:04:40.328 --rc geninfo_all_blocks=1 00:04:40.328 --rc geninfo_unexecuted_blocks=1 00:04:40.328 00:04:40.328 ' 00:04:40.328 22:45:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:40.328 22:45:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59902 00:04:40.328 22:45:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59902 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59902 ']' 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.328 22:45:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.328 22:45:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.328 [2024-12-13 22:45:19.317067] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
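The dpdk_mem_utility entries that follow boil down to one RPC plus two invocations of the reporting script: ask the running target to dump its DPDK memory statistics, then summarize the dump. A short restatement using the commands the trace records (the dump lands at the path the RPC reports, /tmp/spdk_mem_dump.txt, which the script is assumed to read by default):

  scripts/rpc.py env_dpdk_get_mem_stats       # returns {"filename": "/tmp/spdk_mem_dump.txt"}
  scripts/dpdk_mem_info.py                    # heap / mempool / memzone summary
  scripts/dpdk_mem_info.py -m 0               # per-heap detail: free and standard malloc elements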
00:04:40.328 [2024-12-13 22:45:19.317180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:04:40.587 [2024-12-13 22:45:19.475003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.587 [2024-12-13 22:45:19.567240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.155 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.155 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:41.155 22:45:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:41.155 22:45:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:41.155 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.155 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.155 { 00:04:41.155 "filename": "/tmp/spdk_mem_dump.txt" 00:04:41.155 } 00:04:41.155 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.155 22:45:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:41.155 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:41.155 1 heaps totaling size 824.000000 MiB 00:04:41.155 size: 824.000000 MiB heap id: 0 00:04:41.155 end heaps---------- 00:04:41.155 9 mempools totaling size 603.782043 MiB 00:04:41.155 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:41.155 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:41.155 size: 100.555481 MiB name: bdev_io_59902 00:04:41.155 size: 50.003479 MiB name: msgpool_59902 00:04:41.155 size: 36.509338 MiB name: fsdev_io_59902 00:04:41.155 size: 21.763794 MiB name: PDU_Pool 00:04:41.155 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:41.155 size: 4.133484 MiB name: evtpool_59902 00:04:41.156 size: 0.026123 MiB name: Session_Pool 00:04:41.156 end mempools------- 00:04:41.156 6 memzones totaling size 4.142822 MiB 00:04:41.156 size: 1.000366 MiB name: RG_ring_0_59902 00:04:41.156 size: 1.000366 MiB name: RG_ring_1_59902 00:04:41.156 size: 1.000366 MiB name: RG_ring_4_59902 00:04:41.156 size: 1.000366 MiB name: RG_ring_5_59902 00:04:41.156 size: 0.125366 MiB name: RG_ring_2_59902 00:04:41.156 size: 0.015991 MiB name: RG_ring_3_59902 00:04:41.156 end memzones------- 00:04:41.156 22:45:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:41.156 heap id: 0 total size: 824.000000 MiB number of busy elements: 317 number of free elements: 18 00:04:41.156 list of free elements. 
size: 16.780884 MiB 00:04:41.156 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:41.156 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:41.156 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:41.156 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:41.156 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:41.156 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:41.156 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:41.156 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:41.156 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:41.156 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:41.156 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:41.156 element at address: 0x20001b400000 with size: 0.561218 MiB 00:04:41.156 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:41.156 element at address: 0x200019600000 with size: 0.488220 MiB 00:04:41.156 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:41.156 element at address: 0x200012c00000 with size: 0.434204 MiB 00:04:41.156 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:41.156 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:41.156 list of standard malloc elements. size: 199.288208 MiB 00:04:41.156 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:41.156 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:41.156 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:41.156 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:41.156 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:41.156 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:41.156 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:41.156 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:41.156 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:41.156 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:41.156 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:41.156 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:41.156 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:41.156 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:41.156 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:41.156 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:41.156 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967cfc0 
with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b491bc0 with size: 0.000244 MiB 
00:04:41.157 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:41.157 element at 
address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:41.157 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:41.158 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:41.158 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d680 
with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:41.158 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:41.158 list of memzone associated elements. 
size: 607.930908 MiB 00:04:41.158 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:41.158 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:41.158 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:41.158 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:41.158 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:41.158 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59902_0 00:04:41.158 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:41.158 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59902_0 00:04:41.158 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:41.158 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59902_0 00:04:41.158 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:41.158 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:41.158 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:41.158 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:41.158 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:41.158 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59902_0 00:04:41.158 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:41.158 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59902 00:04:41.158 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:41.158 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59902 00:04:41.158 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:41.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:41.158 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:41.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:41.158 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:41.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:41.158 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:41.158 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:41.158 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:41.158 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59902 00:04:41.158 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:41.158 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59902 00:04:41.158 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:41.158 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59902 00:04:41.158 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:41.158 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59902 00:04:41.158 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:41.158 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59902 00:04:41.158 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:41.158 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59902 00:04:41.158 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:41.158 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:41.158 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:41.158 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:41.158 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:41.158 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:41.158 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:41.159 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59902 00:04:41.159 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:41.159 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59902 00:04:41.159 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:41.159 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:41.159 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:41.159 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:41.159 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:41.159 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59902 00:04:41.159 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:41.159 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:41.159 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:41.159 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59902 00:04:41.159 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:41.159 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59902 00:04:41.159 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:41.159 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59902 00:04:41.159 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:41.159 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:41.159 22:45:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:41.159 22:45:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59902 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59902 ']' 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59902 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59902 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59902' 00:04:41.159 killing process with pid 59902 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59902 00:04:41.159 22:45:20 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59902 00:04:42.534 00:04:42.534 real 0m2.420s 00:04:42.534 user 0m2.402s 00:04:42.534 sys 0m0.422s 00:04:42.534 22:45:21 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.534 22:45:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.534 ************************************ 00:04:42.534 END TEST dpdk_mem_utility 00:04:42.534 ************************************ 00:04:42.534 22:45:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:42.534 22:45:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.534 22:45:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.534 22:45:21 -- common/autotest_common.sh@10 -- # set +x 
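The dpdk_mem_utility pass above boils down to two steps: the env_dpdk_get_mem_stats RPC asks the running spdk_tgt to write its DPDK heap state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py turns that dump into the heap/mempool/memzone summary, with -m 0 adding the per-element listing for heap id 0. A minimal manual reproduction, assuming a default build path and RPC socket rather than anything recorded in this run, would be roughly:

    ./build/bin/spdk_tgt &                    # assumed default target binary location
    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # heap, mempool and memzone totals
    ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as listed above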
00:04:42.534 ************************************ 00:04:42.534 START TEST event 00:04:42.534 ************************************ 00:04:42.534 22:45:21 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:42.534 * Looking for test storage... 00:04:42.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:42.534 22:45:21 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.534 22:45:21 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.534 22:45:21 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.793 22:45:21 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.793 22:45:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.793 22:45:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.793 22:45:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.793 22:45:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.793 22:45:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.793 22:45:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.793 22:45:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.793 22:45:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.793 22:45:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.793 22:45:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.793 22:45:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.793 22:45:21 event -- scripts/common.sh@344 -- # case "$op" in 00:04:42.793 22:45:21 event -- scripts/common.sh@345 -- # : 1 00:04:42.793 22:45:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.793 22:45:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.793 22:45:21 event -- scripts/common.sh@365 -- # decimal 1 00:04:42.793 22:45:21 event -- scripts/common.sh@353 -- # local d=1 00:04:42.793 22:45:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.793 22:45:21 event -- scripts/common.sh@355 -- # echo 1 00:04:42.793 22:45:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.793 22:45:21 event -- scripts/common.sh@366 -- # decimal 2 00:04:42.793 22:45:21 event -- scripts/common.sh@353 -- # local d=2 00:04:42.794 22:45:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.794 22:45:21 event -- scripts/common.sh@355 -- # echo 2 00:04:42.794 22:45:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.794 22:45:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.794 22:45:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.794 22:45:21 event -- scripts/common.sh@368 -- # return 0 00:04:42.794 22:45:21 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.794 22:45:21 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.794 --rc genhtml_branch_coverage=1 00:04:42.794 --rc genhtml_function_coverage=1 00:04:42.794 --rc genhtml_legend=1 00:04:42.794 --rc geninfo_all_blocks=1 00:04:42.794 --rc geninfo_unexecuted_blocks=1 00:04:42.794 00:04:42.794 ' 00:04:42.794 22:45:21 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.794 --rc genhtml_branch_coverage=1 00:04:42.794 --rc genhtml_function_coverage=1 00:04:42.794 --rc genhtml_legend=1 00:04:42.794 --rc 
geninfo_all_blocks=1 00:04:42.794 --rc geninfo_unexecuted_blocks=1 00:04:42.794 00:04:42.794 ' 00:04:42.794 22:45:21 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.794 --rc genhtml_branch_coverage=1 00:04:42.794 --rc genhtml_function_coverage=1 00:04:42.794 --rc genhtml_legend=1 00:04:42.794 --rc geninfo_all_blocks=1 00:04:42.794 --rc geninfo_unexecuted_blocks=1 00:04:42.794 00:04:42.794 ' 00:04:42.794 22:45:21 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.794 --rc genhtml_branch_coverage=1 00:04:42.794 --rc genhtml_function_coverage=1 00:04:42.794 --rc genhtml_legend=1 00:04:42.794 --rc geninfo_all_blocks=1 00:04:42.794 --rc geninfo_unexecuted_blocks=1 00:04:42.794 00:04:42.794 ' 00:04:42.794 22:45:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:42.794 22:45:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:42.794 22:45:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.794 22:45:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:42.794 22:45:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.794 22:45:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.794 ************************************ 00:04:42.794 START TEST event_perf 00:04:42.794 ************************************ 00:04:42.794 22:45:21 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.794 Running I/O for 1 seconds...[2024-12-13 22:45:21.742156] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:42.794 [2024-12-13 22:45:21.742343] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59988 ] 00:04:42.794 [2024-12-13 22:45:21.894678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.053 [2024-12-13 22:45:21.993732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.053 Running I/O for 1 seconds...[2024-12-13 22:45:21.994098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.053 [2024-12-13 22:45:21.994020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.053 [2024-12-13 22:45:21.993835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.430 00:04:44.430 lcore 0: 196324 00:04:44.430 lcore 1: 196324 00:04:44.430 lcore 2: 196326 00:04:44.430 lcore 3: 196321 00:04:44.430 done. 
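The event_perf binary above is launched as test/event/event_perf/event_perf -m 0xF -t 1; judging from the EAL line, -m becomes the reactor core mask (-c 0xF, four reactors) and -t the run time in seconds, and when the second elapses each reactor reports how many events it processed before the harness prints done. The reactor and reactor_perf tests that follow reuse the same -t 1 duration on a single core. A hedged standalone invocation, with the checkout path assumed rather than taken from this log:

    cd /path/to/spdk                                 # assumed location of the SPDK checkout
    ./test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, 1 second
    # expected shape: one "lcore N: <count>" line per reactor, then "done."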
00:04:44.430 00:04:44.430 real 0m1.445s 00:04:44.430 user 0m4.250s 00:04:44.430 sys 0m0.075s 00:04:44.430 ************************************ 00:04:44.430 END TEST event_perf 00:04:44.430 ************************************ 00:04:44.430 22:45:23 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.430 22:45:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.430 22:45:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.430 22:45:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:44.430 22:45:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.430 22:45:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.430 ************************************ 00:04:44.430 START TEST event_reactor 00:04:44.430 ************************************ 00:04:44.430 22:45:23 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.430 [2024-12-13 22:45:23.228096] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:44.430 [2024-12-13 22:45:23.228297] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60033 ] 00:04:44.430 [2024-12-13 22:45:23.387424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.430 [2024-12-13 22:45:23.480243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.807 test_start 00:04:45.807 oneshot 00:04:45.807 tick 100 00:04:45.807 tick 100 00:04:45.807 tick 250 00:04:45.807 tick 100 00:04:45.807 tick 100 00:04:45.807 tick 250 00:04:45.807 tick 100 00:04:45.807 tick 500 00:04:45.807 tick 100 00:04:45.807 tick 100 00:04:45.807 tick 250 00:04:45.807 tick 100 00:04:45.807 tick 100 00:04:45.807 test_end 00:04:45.807 00:04:45.807 real 0m1.414s 00:04:45.807 user 0m1.226s 00:04:45.807 sys 0m0.079s 00:04:45.807 22:45:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.807 ************************************ 00:04:45.807 END TEST event_reactor 00:04:45.807 ************************************ 00:04:45.807 22:45:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:45.807 22:45:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.807 22:45:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:45.807 22:45:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.807 22:45:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.807 ************************************ 00:04:45.807 START TEST event_reactor_perf 00:04:45.807 ************************************ 00:04:45.807 22:45:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.807 [2024-12-13 22:45:24.681854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:45.807 [2024-12-13 22:45:24.682157] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60064 ] 00:04:45.807 [2024-12-13 22:45:24.838820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.807 [2024-12-13 22:45:24.920279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.209 test_start 00:04:47.209 test_end 00:04:47.209 Performance: 412715 events per second 00:04:47.209 00:04:47.209 real 0m1.390s 00:04:47.209 user 0m1.213s 00:04:47.209 sys 0m0.069s 00:04:47.209 ************************************ 00:04:47.209 END TEST event_reactor_perf 00:04:47.209 ************************************ 00:04:47.209 22:45:26 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.209 22:45:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.209 22:45:26 event -- event/event.sh@49 -- # uname -s 00:04:47.209 22:45:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.209 22:45:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.209 22:45:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.209 22:45:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.209 22:45:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.209 ************************************ 00:04:47.209 START TEST event_scheduler 00:04:47.209 ************************************ 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.209 * Looking for test storage... 
00:04:47.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:47.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.209 22:45:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.209 --rc genhtml_branch_coverage=1 00:04:47.209 --rc genhtml_function_coverage=1 00:04:47.209 --rc genhtml_legend=1 00:04:47.209 --rc geninfo_all_blocks=1 00:04:47.209 --rc geninfo_unexecuted_blocks=1 00:04:47.209 00:04:47.209 ' 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.209 --rc genhtml_branch_coverage=1 00:04:47.209 --rc genhtml_function_coverage=1 00:04:47.209 --rc genhtml_legend=1 00:04:47.209 --rc geninfo_all_blocks=1 00:04:47.209 --rc geninfo_unexecuted_blocks=1 00:04:47.209 00:04:47.209 ' 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.209 --rc genhtml_branch_coverage=1 00:04:47.209 --rc genhtml_function_coverage=1 00:04:47.209 --rc genhtml_legend=1 00:04:47.209 --rc geninfo_all_blocks=1 00:04:47.209 --rc geninfo_unexecuted_blocks=1 00:04:47.209 00:04:47.209 ' 00:04:47.209 22:45:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.209 --rc genhtml_branch_coverage=1 00:04:47.209 --rc genhtml_function_coverage=1 00:04:47.210 --rc genhtml_legend=1 00:04:47.210 --rc geninfo_all_blocks=1 00:04:47.210 --rc geninfo_unexecuted_blocks=1 00:04:47.210 00:04:47.210 ' 00:04:47.210 22:45:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.210 22:45:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60140 00:04:47.210 22:45:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.210 22:45:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60140 00:04:47.210 22:45:26 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60140 ']' 00:04:47.210 22:45:26 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.210 22:45:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.210 22:45:26 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.210 22:45:26 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.210 22:45:26 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.210 22:45:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.210 [2024-12-13 22:45:26.292354] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:47.210 [2024-12-13 22:45:26.292648] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60140 ] 00:04:47.468 [2024-12-13 22:45:26.453522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.468 [2024-12-13 22:45:26.555252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.468 [2024-12-13 22:45:26.555545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.468 [2024-12-13 22:45:26.555669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.468 [2024-12-13 22:45:26.555671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.035 22:45:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.035 22:45:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:48.035 22:45:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:48.035 22:45:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.035 22:45:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.035 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.035 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.035 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.035 POWER: Cannot set governor of lcore 0 to performance 00:04:48.035 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.035 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.035 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.035 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.035 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:48.035 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:48.035 POWER: Unable to set Power Management Environment for lcore 0 00:04:48.035 [2024-12-13 22:45:27.149497] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:48.035 [2024-12-13 22:45:27.149535] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:48.035 [2024-12-13 22:45:27.149591] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:48.035 [2024-12-13 22:45:27.149623] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:48.035 [2024-12-13 22:45:27.149670] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:48.035 [2024-12-13 22:45:27.149694] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:48.035 22:45:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.035 22:45:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:48.035 22:45:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.035 22:45:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.296 [2024-12-13 22:45:27.387813] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:48.296 22:45:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.296 22:45:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.296 22:45:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.296 22:45:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.296 22:45:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.296 ************************************ 00:04:48.296 START TEST scheduler_create_thread 00:04:48.296 ************************************ 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.296 2 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.296 3 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.296 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 4 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 5 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 6 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 7 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 8 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 9 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 10 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.556 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.123 ************************************ 00:04:49.123 END TEST scheduler_create_thread 00:04:49.123 ************************************ 00:04:49.123 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.123 00:04:49.123 real 0m0.595s 00:04:49.123 user 0m0.013s 00:04:49.123 sys 0m0.007s 00:04:49.123 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.123 22:45:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.123 22:45:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:49.123 22:45:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60140 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60140 ']' 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60140 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60140 00:04:49.123 killing process with pid 60140 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60140' 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60140 00:04:49.123 22:45:28 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60140 00:04:49.381 [2024-12-13 22:45:28.477122] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
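The scheduler_create_thread subtest drives the test app's own RPC plugin: scheduler_thread_create registers a thread with a name, an optional cpumask and an active percentage, scheduler_thread_set_active retunes one, and scheduler_thread_delete removes it, which is what the trace does for thread ids 11 and 12. A hedged replay of that plugin sequence; the socket and plugin name come from this log, but rpc.py must be able to import the plugin (the harness arranges PYTHONPATH for this) and the thread ids are illustrative:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }
    rpc scheduler_thread_create -n half_active -a 0   # returns a thread id (11 in the trace)
    rpc scheduler_thread_set_active 11 50             # raise its requested busy percentage
    rpc scheduler_thread_create -n deleted -a 100     # id 12 in the trace
    rpc scheduler_thread_delete 12                    # and remove it again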
00:04:49.948 00:04:49.948 real 0m2.964s 00:04:49.948 user 0m5.662s 00:04:49.948 sys 0m0.322s 00:04:49.948 22:45:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.948 ************************************ 00:04:49.948 END TEST event_scheduler 00:04:49.948 ************************************ 00:04:49.948 22:45:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.948 22:45:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:50.206 22:45:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:50.206 22:45:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.206 22:45:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.206 22:45:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.206 ************************************ 00:04:50.206 START TEST app_repeat 00:04:50.206 ************************************ 00:04:50.206 22:45:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:50.206 Process app_repeat pid: 60218 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60218 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60218' 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.206 spdk_app_start Round 0 00:04:50.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60218 /var/tmp/spdk-nbd.sock 00:04:50.206 22:45:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60218 ']' 00:04:50.206 22:45:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.206 22:45:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.206 22:45:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:50.206 22:45:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.206 22:45:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.206 22:45:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.206 [2024-12-13 22:45:29.131447] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
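Before any nbd work, event.sh starts the app_repeat binary on its own RPC socket (-r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, giving pid 60218 above) and blocks in waitforlisten until that socket answers. A rough sketch of that launch-and-wait pattern follows; the polling loop only illustrates what waitforlisten accomplishes here, it is not a copy of autotest_common.sh, and the helper name, retry count, and sleep interval are assumed values.

    # Sketch: start app_repeat against a dedicated RPC socket and wait until it listens.
    rpc_sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r "$rpc_sock" -m 0x3 -t 4 &
    repeat_pid=$!

    wait_for_rpc() {                        # hypothetical helper; stands in for waitforlisten
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do     # retry budget is an assumption
            kill -0 "$pid" 2>/dev/null || return 1    # bail out if the app already died
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
                >/dev/null 2>&1 && return 0           # ready once the socket answers an RPC
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc "$repeat_pid" "$rpc_sock"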
00:04:50.206 [2024-12-13 22:45:29.131627] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60218 ] 00:04:50.206 [2024-12-13 22:45:29.281578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.464 [2024-12-13 22:45:29.376311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.464 [2024-12-13 22:45:29.376443] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.029 22:45:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.029 22:45:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:51.029 22:45:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.287 Malloc0 00:04:51.287 22:45:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.546 Malloc1 00:04:51.546 22:45:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.546 /dev/nbd0 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:51.546 22:45:30 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.546 1+0 records in 00:04:51.546 1+0 records out 00:04:51.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286401 s, 14.3 MB/s 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.546 22:45:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.546 22:45:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.805 /dev/nbd1 00:04:51.805 22:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.805 22:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.805 1+0 records in 00:04:51.805 1+0 records out 00:04:51.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201385 s, 20.3 MB/s 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.805 22:45:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.805 22:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.805 22:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.805 22:45:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.805 22:45:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
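nbd_start_disk exports each Malloc bdev as /dev/nbd0 and /dev/nbd1, and the waitfornbd helper whose steps appear above then proves each device is usable: poll /proc/partitions until the name shows up, read one 4 KiB block with O_DIRECT into a scratch file, and check the copy is non-empty before removing it. A condensed sketch of that probe follows; the retry budget and sleep interval are assumptions, and the real helper retries the dd step as well.

    # Condensed waitfornbd-style readiness probe, following the commands in the trace.
    testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                        # retry budget is an assumption
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Read a single 4 KiB block straight from the device to prove it answers I/O.
        dd if=/dev/"$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$testfile")
        rm -f "$testfile"
        [ "$size" != 0 ]
    }

    waitfornbd nbd0
    waitfornbd nbd1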
00:04:51.805 22:45:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.067 22:45:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.067 { 00:04:52.067 "nbd_device": "/dev/nbd0", 00:04:52.067 "bdev_name": "Malloc0" 00:04:52.067 }, 00:04:52.067 { 00:04:52.067 "nbd_device": "/dev/nbd1", 00:04:52.067 "bdev_name": "Malloc1" 00:04:52.067 } 00:04:52.067 ]' 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.068 { 00:04:52.068 "nbd_device": "/dev/nbd0", 00:04:52.068 "bdev_name": "Malloc0" 00:04:52.068 }, 00:04:52.068 { 00:04:52.068 "nbd_device": "/dev/nbd1", 00:04:52.068 "bdev_name": "Malloc1" 00:04:52.068 } 00:04:52.068 ]' 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.068 /dev/nbd1' 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.068 /dev/nbd1' 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.068 22:45:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.068 256+0 records in 00:04:52.068 256+0 records out 00:04:52.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00715022 s, 147 MB/s 00:04:52.069 22:45:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.069 22:45:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.069 256+0 records in 00:04:52.069 256+0 records out 00:04:52.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204814 s, 51.2 MB/s 00:04:52.069 22:45:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.069 22:45:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.331 256+0 records in 00:04:52.331 256+0 records out 00:04:52.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205451 s, 51.0 MB/s 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.331 22:45:31 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.331 22:45:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.590 22:45:31 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.590 22:45:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.849 22:45:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.849 22:45:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.107 22:45:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.042 [2024-12-13 22:45:32.965849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.042 [2024-12-13 22:45:33.059348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.042 [2024-12-13 22:45:33.059435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.042 [2024-12-13 22:45:33.177437] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.042 [2024-12-13 22:45:33.177494] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.574 22:45:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.574 spdk_app_start Round 1 00:04:56.574 22:45:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:56.574 22:45:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60218 /var/tmp/spdk-nbd.sock 00:04:56.574 22:45:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60218 ']' 00:04:56.574 22:45:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.574 22:45:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.574 22:45:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
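Each app_repeat round then runs the same data-verification cycle that Round 0 shows above: fill a 1 MiB scratch file from /dev/urandom, write it through every /dev/nbdX with O_DIRECT, read each device back with cmp, delete the scratch file, detach the devices, and finally confirm nbd_get_disks reports nothing left. A compact sketch of that cycle is given here; the paths, block counts, and jq filter are copied from the trace, and the surrounding glue is an assumption.

    # Sketch of the nbd write/verify/teardown cycle from the trace above.
    rpc_sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$randfile" bs=4096 count=256              # 1 MiB of random data

    for nbd in "${nbd_list[@]}"; do
        dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct   # write through the device
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$randfile" "$nbd"                              # read back and compare
    done
    rm "$randfile"

    # Detach both devices and check that no nbd device is still exported.
    for nbd in "${nbd_list[@]}"; do
        "$rpc" -s "$rpc_sock" nbd_stop_disk "$nbd"
    done
    count=$("$rpc" -s "$rpc_sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 0 ]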
00:04:56.574 22:45:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.574 22:45:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.574 22:45:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.574 22:45:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:56.574 22:45:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.574 Malloc0 00:04:56.574 22:45:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.832 Malloc1 00:04:56.832 22:45:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.832 22:45:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.091 /dev/nbd0 00:04:57.091 22:45:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.091 22:45:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.091 1+0 records in 00:04:57.091 1+0 records out 
00:04:57.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163378 s, 25.1 MB/s 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.091 22:45:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.091 22:45:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.091 22:45:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.091 22:45:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.350 /dev/nbd1 00:04:57.350 22:45:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.350 22:45:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.350 22:45:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:57.350 22:45:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.350 22:45:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.350 22:45:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.351 1+0 records in 00:04:57.351 1+0 records out 00:04:57.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181123 s, 22.6 MB/s 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.351 22:45:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.351 22:45:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.351 22:45:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.351 22:45:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.351 22:45:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.351 22:45:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:57.609 { 00:04:57.609 "nbd_device": "/dev/nbd0", 00:04:57.609 "bdev_name": "Malloc0" 00:04:57.609 }, 00:04:57.609 { 00:04:57.609 "nbd_device": "/dev/nbd1", 00:04:57.609 "bdev_name": "Malloc1" 00:04:57.609 } 
00:04:57.609 ]' 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.609 { 00:04:57.609 "nbd_device": "/dev/nbd0", 00:04:57.609 "bdev_name": "Malloc0" 00:04:57.609 }, 00:04:57.609 { 00:04:57.609 "nbd_device": "/dev/nbd1", 00:04:57.609 "bdev_name": "Malloc1" 00:04:57.609 } 00:04:57.609 ]' 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.609 /dev/nbd1' 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.609 /dev/nbd1' 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.609 256+0 records in 00:04:57.609 256+0 records out 00:04:57.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00790512 s, 133 MB/s 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.609 256+0 records in 00:04:57.609 256+0 records out 00:04:57.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256877 s, 40.8 MB/s 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.609 256+0 records in 00:04:57.609 256+0 records out 00:04:57.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207203 s, 50.6 MB/s 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.609 22:45:36 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.609 22:45:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:57.610 22:45:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.610 22:45:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.610 22:45:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.610 22:45:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.610 22:45:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.610 22:45:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.610 22:45:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.869 22:45:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.140 22:45:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.412 22:45:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.412 22:45:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.412 22:45:37 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:58.412 22:45:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.412 22:45:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.412 22:45:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.412 22:45:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.413 22:45:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.413 22:45:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.413 22:45:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.413 22:45:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.413 22:45:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.413 22:45:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.671 22:45:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.239 [2024-12-13 22:45:38.272366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.239 [2024-12-13 22:45:38.361136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.239 [2024-12-13 22:45:38.361205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.497 [2024-12-13 22:45:38.474439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.497 [2024-12-13 22:45:38.474508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.030 22:45:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.030 spdk_app_start Round 2 00:05:02.030 22:45:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:02.030 22:45:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60218 /var/tmp/spdk-nbd.sock 00:05:02.030 22:45:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60218 ']' 00:05:02.030 22:45:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.030 22:45:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.030 22:45:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:02.030 22:45:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.030 22:45:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.030 22:45:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.030 22:45:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.030 22:45:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.030 Malloc0 00:05:02.030 22:45:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.288 Malloc1 00:05:02.288 22:45:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.288 22:45:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.288 22:45:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.288 22:45:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.288 22:45:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.288 22:45:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.288 22:45:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.288 22:45:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.288 22:45:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.288 22:45:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.289 22:45:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.289 22:45:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.289 22:45:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.289 22:45:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.289 22:45:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.289 22:45:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.547 /dev/nbd0 00:05:02.547 22:45:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.547 22:45:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.547 1+0 records in 00:05:02.547 1+0 records out 
00:05:02.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174659 s, 23.5 MB/s 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.547 22:45:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.547 22:45:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.547 22:45:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.547 22:45:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.806 /dev/nbd1 00:05:02.806 22:45:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.806 22:45:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.806 1+0 records in 00:05:02.806 1+0 records out 00:05:02.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196994 s, 20.8 MB/s 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.806 22:45:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.806 22:45:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.806 22:45:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.806 22:45:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.806 22:45:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.806 22:45:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.065 { 00:05:03.065 "nbd_device": "/dev/nbd0", 00:05:03.065 "bdev_name": "Malloc0" 00:05:03.065 }, 00:05:03.065 { 00:05:03.065 "nbd_device": "/dev/nbd1", 00:05:03.065 "bdev_name": "Malloc1" 00:05:03.065 } 
00:05:03.065 ]' 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.065 { 00:05:03.065 "nbd_device": "/dev/nbd0", 00:05:03.065 "bdev_name": "Malloc0" 00:05:03.065 }, 00:05:03.065 { 00:05:03.065 "nbd_device": "/dev/nbd1", 00:05:03.065 "bdev_name": "Malloc1" 00:05:03.065 } 00:05:03.065 ]' 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.065 /dev/nbd1' 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.065 /dev/nbd1' 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.065 256+0 records in 00:05:03.065 256+0 records out 00:05:03.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00668389 s, 157 MB/s 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.065 256+0 records in 00:05:03.065 256+0 records out 00:05:03.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152636 s, 68.7 MB/s 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.065 256+0 records in 00:05:03.065 256+0 records out 00:05:03.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175251 s, 59.8 MB/s 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.065 22:45:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.065 22:45:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.324 22:45:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.583 22:45:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.842 22:45:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.842 22:45:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.101 22:45:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.668 [2024-12-13 22:45:43.685648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.668 [2024-12-13 22:45:43.765707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.668 [2024-12-13 22:45:43.765785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.927 [2024-12-13 22:45:43.879522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.927 [2024-12-13 22:45:43.879591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.458 22:45:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60218 /var/tmp/spdk-nbd.sock 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60218 ']' 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:07.459 22:45:46 event.app_repeat -- event/event.sh@39 -- # killprocess 60218 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60218 ']' 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60218 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60218 00:05:07.459 killing process with pid 60218 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60218' 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60218 00:05:07.459 22:45:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60218 00:05:08.026 spdk_app_start is called in Round 0. 00:05:08.026 Shutdown signal received, stop current app iteration 00:05:08.027 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:08.027 spdk_app_start is called in Round 1. 00:05:08.027 Shutdown signal received, stop current app iteration 00:05:08.027 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:08.027 spdk_app_start is called in Round 2. 00:05:08.027 Shutdown signal received, stop current app iteration 00:05:08.027 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:08.027 spdk_app_start is called in Round 3. 00:05:08.027 Shutdown signal received, stop current app iteration 00:05:08.027 ************************************ 00:05:08.027 END TEST app_repeat 00:05:08.027 ************************************ 00:05:08.027 22:45:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:08.027 22:45:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:08.027 00:05:08.027 real 0m17.790s 00:05:08.027 user 0m38.831s 00:05:08.027 sys 0m2.115s 00:05:08.027 22:45:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.027 22:45:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.027 22:45:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:08.027 22:45:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:08.027 22:45:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.027 22:45:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.027 22:45:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.027 ************************************ 00:05:08.027 START TEST cpu_locks 00:05:08.027 ************************************ 00:05:08.027 22:45:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:08.027 * Looking for test storage... 
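The run above is torn down with killprocess 60218, and the helper's checks are visible in the trace: require a non-empty pid, confirm the process still exists with kill -0, look up its command name with ps (reactor_0 here) so a sudo wrapper is never signalled directly, then kill it and wait for it to exit. A simplified sketch of that helper follows; the real version carries extra handling (sudo-wrapped and non-Linux cases) that is omitted here.

    # Simplified killprocess sketch, following the steps visible in the trace.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in this run
        [ "$process_name" = sudo ] && return 1            # never signal the privileged wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

    killprocess 60218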
00:05:08.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:08.027 22:45:46 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.027 22:45:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.027 22:45:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.027 22:45:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.027 22:45:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:08.027 22:45:47 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.027 22:45:47 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.027 --rc genhtml_branch_coverage=1 00:05:08.027 --rc genhtml_function_coverage=1 00:05:08.027 --rc genhtml_legend=1 00:05:08.027 --rc geninfo_all_blocks=1 00:05:08.027 --rc geninfo_unexecuted_blocks=1 00:05:08.027 00:05:08.027 ' 00:05:08.027 22:45:47 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.027 --rc genhtml_branch_coverage=1 00:05:08.027 --rc genhtml_function_coverage=1 
00:05:08.027 --rc genhtml_legend=1 00:05:08.027 --rc geninfo_all_blocks=1 00:05:08.027 --rc geninfo_unexecuted_blocks=1 00:05:08.027 00:05:08.027 ' 00:05:08.027 22:45:47 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.027 --rc genhtml_branch_coverage=1 00:05:08.027 --rc genhtml_function_coverage=1 00:05:08.027 --rc genhtml_legend=1 00:05:08.027 --rc geninfo_all_blocks=1 00:05:08.027 --rc geninfo_unexecuted_blocks=1 00:05:08.027 00:05:08.027 ' 00:05:08.027 22:45:47 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.027 --rc genhtml_branch_coverage=1 00:05:08.027 --rc genhtml_function_coverage=1 00:05:08.027 --rc genhtml_legend=1 00:05:08.027 --rc geninfo_all_blocks=1 00:05:08.027 --rc geninfo_unexecuted_blocks=1 00:05:08.027 00:05:08.027 ' 00:05:08.027 22:45:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:08.027 22:45:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:08.027 22:45:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:08.027 22:45:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:08.027 22:45:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.027 22:45:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.027 22:45:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.027 ************************************ 00:05:08.027 START TEST default_locks 00:05:08.027 ************************************ 00:05:08.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60650 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60650 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60650 ']' 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.027 22:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.027 [2024-12-13 22:45:47.154591] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:08.027 [2024-12-13 22:45:47.154719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60650 ] 00:05:08.287 [2024-12-13 22:45:47.311324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.287 [2024-12-13 22:45:47.400945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.901 22:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.901 22:45:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:08.901 22:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60650 00:05:08.901 22:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60650 00:05:08.901 22:45:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60650 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60650 ']' 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60650 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60650 00:05:09.161 killing process with pid 60650 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60650' 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60650 00:05:09.161 22:45:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60650 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60650 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60650 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:10.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60650 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60650 ']' 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.548 ERROR: process (pid: 60650) is no longer running 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.548 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60650) - No such process 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:10.548 22:45:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.549 22:45:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.549 22:45:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:10.549 00:05:10.549 real 0m2.404s 00:05:10.549 user 0m2.369s 00:05:10.549 sys 0m0.483s 00:05:10.549 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.549 ************************************ 00:05:10.549 END TEST default_locks 00:05:10.549 ************************************ 00:05:10.549 22:45:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.549 22:45:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:10.549 22:45:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.549 22:45:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.549 22:45:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.549 ************************************ 00:05:10.549 START TEST default_locks_via_rpc 00:05:10.549 ************************************ 00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60703 00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60703 00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60703 ']' 
00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.549 22:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.549 [2024-12-13 22:45:49.630662] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:10.549 [2024-12-13 22:45:49.630810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60703 ] 00:05:10.807 [2024-12-13 22:45:49.792082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.807 [2024-12-13 22:45:49.899286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60703 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60703 00:05:11.378 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60703 00:05:11.640 22:45:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60703 ']' 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60703 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60703 00:05:11.640 killing process with pid 60703 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60703' 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60703 00:05:11.640 22:45:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60703 00:05:13.029 00:05:13.029 real 0m2.408s 00:05:13.029 user 0m2.362s 00:05:13.029 sys 0m0.486s 00:05:13.029 22:45:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.029 22:45:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.029 ************************************ 00:05:13.029 END TEST default_locks_via_rpc 00:05:13.029 ************************************ 00:05:13.029 22:45:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:13.029 22:45:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.029 22:45:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.029 22:45:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.029 ************************************ 00:05:13.029 START TEST non_locking_app_on_locked_coremask 00:05:13.029 ************************************ 00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60766 00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60766 /var/tmp/spdk.sock 00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60766 ']' 00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.029 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.029 [2024-12-13 22:45:52.100475] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:13.029 [2024-12-13 22:45:52.101131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60766 ] 00:05:13.291 [2024-12-13 22:45:52.257177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.291 [2024-12-13 22:45:52.371267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60782 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60782 /var/tmp/spdk2.sock 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60782 ']' 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.866 22:45:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.127 [2024-12-13 22:45:53.029105] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:14.127 [2024-12-13 22:45:53.029224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60782 ] 00:05:14.127 [2024-12-13 22:45:53.192812] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.127 [2024-12-13 22:45:53.192867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.388 [2024-12-13 22:45:53.399881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.331 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.331 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.331 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60766 00:05:15.331 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60766 00:05:15.331 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.590 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60766 00:05:15.590 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60766 ']' 00:05:15.590 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60766 00:05:15.590 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:15.590 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.590 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60766 00:05:15.851 killing process with pid 60766 00:05:15.851 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.851 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.851 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60766' 00:05:15.851 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60766 00:05:15.851 22:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60766 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60782 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60782 ']' 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60782 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60782 00:05:18.393 killing process with pid 60782 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60782' 00:05:18.393 22:45:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60782 00:05:18.393 22:45:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60782 00:05:19.777 ************************************ 00:05:19.777 END TEST non_locking_app_on_locked_coremask 00:05:19.777 ************************************ 00:05:19.777 00:05:19.777 real 0m6.596s 00:05:19.777 user 0m6.774s 00:05:19.777 sys 0m0.912s 00:05:19.777 22:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.777 22:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.777 22:45:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:19.777 22:45:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.777 22:45:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.777 22:45:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.777 ************************************ 00:05:19.777 START TEST locking_app_on_unlocked_coremask 00:05:19.777 ************************************ 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60873 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60873 /var/tmp/spdk.sock 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60873 ']' 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.777 22:45:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.777 [2024-12-13 22:45:58.768533] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:19.777 [2024-12-13 22:45:58.768976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60873 ] 00:05:20.035 [2024-12-13 22:45:58.949009] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:20.035 [2024-12-13 22:45:58.949071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.035 [2024-12-13 22:45:59.066606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60889 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60889 /var/tmp/spdk2.sock 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60889 ']' 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.602 22:45:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.861 [2024-12-13 22:45:59.801867] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:20.861 [2024-12-13 22:45:59.802200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60889 ] 00:05:20.861 [2024-12-13 22:45:59.975587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.119 [2024-12-13 22:46:00.212882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.493 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.493 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.493 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60889 00:05:22.493 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60889 00:05:22.493 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.751 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60873 00:05:22.751 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60873 ']' 00:05:22.751 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60873 00:05:22.751 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.751 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.751 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60873 00:05:23.009 killing process with pid 60873 00:05:23.009 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.009 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.009 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60873' 00:05:23.009 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60873 00:05:23.009 22:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60873 00:05:25.537 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60889 00:05:25.537 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60889 ']' 00:05:25.537 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60889 00:05:25.537 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.537 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.537 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60889 00:05:25.796 killing process with pid 60889 00:05:25.796 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.796 22:46:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.796 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60889' 00:05:25.796 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60889 00:05:25.796 22:46:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60889 00:05:26.731 ************************************ 00:05:26.731 END TEST locking_app_on_unlocked_coremask 00:05:26.731 ************************************ 00:05:26.731 00:05:26.731 real 0m7.170s 00:05:26.731 user 0m7.279s 00:05:26.731 sys 0m1.000s 00:05:26.731 22:46:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.731 22:46:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.989 22:46:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:26.989 22:46:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.989 22:46:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.989 22:46:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.989 ************************************ 00:05:26.989 START TEST locking_app_on_locked_coremask 00:05:26.989 ************************************ 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60991 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60991 /var/tmp/spdk.sock 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60991 ']' 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.989 22:46:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.989 [2024-12-13 22:46:05.965906] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:26.989 [2024-12-13 22:46:05.966031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60991 ] 00:05:26.989 [2024-12-13 22:46:06.122700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.266 [2024-12-13 22:46:06.218658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61007 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61007 /var/tmp/spdk2.sock 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61007 /var/tmp/spdk2.sock 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:27.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61007 /var/tmp/spdk2.sock 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61007 ']' 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.841 22:46:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 [2024-12-13 22:46:06.876822] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:27.841 [2024-12-13 22:46:06.877121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61007 ] 00:05:28.099 [2024-12-13 22:46:07.050240] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60991 has claimed it. 00:05:28.099 [2024-12-13 22:46:07.050296] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:28.665 ERROR: process (pid: 61007) is no longer running 00:05:28.665 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61007) - No such process 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60991 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60991 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60991 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60991 ']' 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60991 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60991 00:05:28.665 killing process with pid 60991 00:05:28.665 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.666 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.666 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60991' 00:05:28.666 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60991 00:05:28.666 22:46:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60991 00:05:30.569 00:05:30.569 real 0m3.363s 00:05:30.569 user 0m3.562s 00:05:30.569 sys 0m0.552s 00:05:30.569 22:46:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.569 22:46:09 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:30.569 ************************************ 00:05:30.569 END TEST locking_app_on_locked_coremask 00:05:30.569 ************************************ 00:05:30.569 22:46:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:30.569 22:46:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.569 22:46:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.569 22:46:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.569 ************************************ 00:05:30.569 START TEST locking_overlapped_coremask 00:05:30.569 ************************************ 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61066 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61066 /var/tmp/spdk.sock 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61066 ']' 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:30.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.569 22:46:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.569 [2024-12-13 22:46:09.366611] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:30.569 [2024-12-13 22:46:09.366724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61066 ] 00:05:30.569 [2024-12-13 22:46:09.522417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.569 [2024-12-13 22:46:09.622134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.569 [2024-12-13 22:46:09.622228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.569 [2024-12-13 22:46:09.622327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61084 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61084 /var/tmp/spdk2.sock 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61084 /var/tmp/spdk2.sock 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61084 /var/tmp/spdk2.sock 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61084 ']' 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.135 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.393 [2024-12-13 22:46:10.292804] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:31.393 [2024-12-13 22:46:10.293102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61084 ] 00:05:31.393 [2024-12-13 22:46:10.472301] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61066 has claimed it. 00:05:31.393 [2024-12-13 22:46:10.472358] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.959 ERROR: process (pid: 61084) is no longer running 00:05:31.959 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61084) - No such process 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61066 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61066 ']' 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61066 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61066 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.959 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61066' 00:05:31.959 killing process with pid 61066 00:05:31.960 22:46:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61066 00:05:31.960 22:46:10 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61066 00:05:33.330 00:05:33.330 real 0m3.149s 00:05:33.330 user 0m8.599s 00:05:33.330 sys 0m0.442s 00:05:33.330 22:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.330 22:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.330 ************************************ 00:05:33.330 END TEST locking_overlapped_coremask 00:05:33.330 ************************************ 00:05:33.589 22:46:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:33.589 22:46:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.589 22:46:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.589 22:46:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.589 ************************************ 00:05:33.589 START TEST locking_overlapped_coremask_via_rpc 00:05:33.589 ************************************ 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61141 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61141 /var/tmp/spdk.sock 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61141 ']' 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.589 22:46:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:33.589 [2024-12-13 22:46:12.545772] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:33.589 [2024-12-13 22:46:12.545970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61141 ] 00:05:33.589 [2024-12-13 22:46:12.696908] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.589 [2024-12-13 22:46:12.697078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.846 [2024-12-13 22:46:12.793575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.846 [2024-12-13 22:46:12.793870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.846 [2024-12-13 22:46:12.793884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61155 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61155 /var/tmp/spdk2.sock 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61155 ']' 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.412 22:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.412 [2024-12-13 22:46:13.483999] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:34.412 [2024-12-13 22:46:13.484410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61155 ] 00:05:34.670 [2024-12-13 22:46:13.674933] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
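For orientation, the two coremasks used by this test can be decoded as follows; this is a reading of the flags and reactor lines in the trace above, not output captured from the run itself:
# -m 0x7  -> 0b00111 -> cores 0,1,2  (first spdk_tgt, /var/tmp/spdk.sock, pid 61141)
# -m 0x1c -> 0b11100 -> cores 2,3,4  (second spdk_tgt, /var/tmp/spdk2.sock, pid 61155)
# The masks deliberately overlap on core 2. Both targets start with --disable-cpumask-locks,
# so the per-core lock files are only claimed later, when the test enables them over RPC.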
00:05:34.670 [2024-12-13 22:46:13.674990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.928 [2024-12-13 22:46:13.883602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.928 [2024-12-13 22:46:13.886842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.928 [2024-12-13 22:46:13.886879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:36.299 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.299 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.300 [2024-12-13 22:46:15.071905] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61141 has claimed it. 00:05:36.300 request: 00:05:36.300 { 00:05:36.300 "method": "framework_enable_cpumask_locks", 00:05:36.300 "req_id": 1 00:05:36.300 } 00:05:36.300 Got JSON-RPC error response 00:05:36.300 response: 00:05:36.300 { 00:05:36.300 "code": -32603, 00:05:36.300 "message": "Failed to claim CPU core: 2" 00:05:36.300 } 00:05:36.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
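The failing step above reduces to the following RPC sequence; a minimal sketch assuming direct use of scripts/rpc.py (the trace itself goes through the rpc_cmd and NOT wrappers from autotest_common.sh):
# First target (cores 0-2) claims its cores:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
# Second target (cores 2-4) then tries the same and is expected to fail,
# because core 2 is already locked by pid 61141:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# -> JSON-RPC error -32603, "Failed to claim CPU core: 2" (as shown in the response above)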
00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61141 /var/tmp/spdk.sock 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61141 ']' 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61155 /var/tmp/spdk2.sock 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61155 ']' 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.300 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.558 ************************************ 00:05:36.558 END TEST locking_overlapped_coremask_via_rpc 00:05:36.558 ************************************ 00:05:36.558 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.558 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.558 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:36.558 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.558 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.558 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.558 00:05:36.558 real 0m3.017s 00:05:36.558 user 0m1.098s 00:05:36.558 sys 0m0.138s 00:05:36.558 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.558 22:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.558 22:46:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:36.558 22:46:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61141 ]] 00:05:36.558 22:46:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61141 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61141 ']' 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61141 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61141 00:05:36.558 killing process with pid 61141 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61141' 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61141 00:05:36.558 22:46:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61141 00:05:37.931 22:46:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61155 ]] 00:05:37.931 22:46:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61155 00:05:37.931 22:46:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61155 ']' 00:05:37.931 22:46:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61155 00:05:37.931 22:46:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:37.931 22:46:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.931 
22:46:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61155 00:05:37.931 killing process with pid 61155 00:05:37.931 22:46:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:37.931 22:46:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:37.931 22:46:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61155' 00:05:37.931 22:46:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61155 00:05:37.931 22:46:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61155 00:05:39.321 22:46:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.321 22:46:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:39.321 22:46:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61141 ]] 00:05:39.321 22:46:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61141 00:05:39.321 22:46:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61141 ']' 00:05:39.321 22:46:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61141 00:05:39.321 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61141) - No such process 00:05:39.321 Process with pid 61141 is not found 00:05:39.321 22:46:18 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61141 is not found' 00:05:39.321 22:46:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61155 ]] 00:05:39.321 22:46:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61155 00:05:39.321 22:46:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61155 ']' 00:05:39.321 Process with pid 61155 is not found 00:05:39.321 22:46:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61155 00:05:39.321 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61155) - No such process 00:05:39.321 22:46:18 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61155 is not found' 00:05:39.321 22:46:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.321 00:05:39.321 real 0m31.230s 00:05:39.321 user 0m53.600s 00:05:39.321 sys 0m4.880s 00:05:39.321 22:46:18 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.321 ************************************ 00:05:39.321 END TEST cpu_locks 00:05:39.321 ************************************ 00:05:39.321 22:46:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.321 ************************************ 00:05:39.321 END TEST event 00:05:39.321 ************************************ 00:05:39.321 00:05:39.321 real 0m56.620s 00:05:39.321 user 1m44.946s 00:05:39.321 sys 0m7.756s 00:05:39.321 22:46:18 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.321 22:46:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.321 22:46:18 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:39.321 22:46:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.321 22:46:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.321 22:46:18 -- common/autotest_common.sh@10 -- # set +x 00:05:39.321 ************************************ 00:05:39.321 START TEST thread 00:05:39.321 ************************************ 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:39.321 * Looking for test storage... 
00:05:39.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.321 22:46:18 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.321 22:46:18 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.321 22:46:18 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.321 22:46:18 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.321 22:46:18 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.321 22:46:18 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.321 22:46:18 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.321 22:46:18 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.321 22:46:18 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.321 22:46:18 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.321 22:46:18 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.321 22:46:18 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:39.321 22:46:18 thread -- scripts/common.sh@345 -- # : 1 00:05:39.321 22:46:18 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.321 22:46:18 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.321 22:46:18 thread -- scripts/common.sh@365 -- # decimal 1 00:05:39.321 22:46:18 thread -- scripts/common.sh@353 -- # local d=1 00:05:39.321 22:46:18 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.321 22:46:18 thread -- scripts/common.sh@355 -- # echo 1 00:05:39.321 22:46:18 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.321 22:46:18 thread -- scripts/common.sh@366 -- # decimal 2 00:05:39.321 22:46:18 thread -- scripts/common.sh@353 -- # local d=2 00:05:39.321 22:46:18 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.321 22:46:18 thread -- scripts/common.sh@355 -- # echo 2 00:05:39.321 22:46:18 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.321 22:46:18 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.321 22:46:18 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.321 22:46:18 thread -- scripts/common.sh@368 -- # return 0 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.321 --rc genhtml_branch_coverage=1 00:05:39.321 --rc genhtml_function_coverage=1 00:05:39.321 --rc genhtml_legend=1 00:05:39.321 --rc geninfo_all_blocks=1 00:05:39.321 --rc geninfo_unexecuted_blocks=1 00:05:39.321 00:05:39.321 ' 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.321 --rc genhtml_branch_coverage=1 00:05:39.321 --rc genhtml_function_coverage=1 00:05:39.321 --rc genhtml_legend=1 00:05:39.321 --rc geninfo_all_blocks=1 00:05:39.321 --rc geninfo_unexecuted_blocks=1 00:05:39.321 00:05:39.321 ' 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:39.321 --rc genhtml_branch_coverage=1 00:05:39.321 --rc genhtml_function_coverage=1 00:05:39.321 --rc genhtml_legend=1 00:05:39.321 --rc geninfo_all_blocks=1 00:05:39.321 --rc geninfo_unexecuted_blocks=1 00:05:39.321 00:05:39.321 ' 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.321 --rc genhtml_branch_coverage=1 00:05:39.321 --rc genhtml_function_coverage=1 00:05:39.321 --rc genhtml_legend=1 00:05:39.321 --rc geninfo_all_blocks=1 00:05:39.321 --rc geninfo_unexecuted_blocks=1 00:05:39.321 00:05:39.321 ' 00:05:39.321 22:46:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.321 22:46:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.321 ************************************ 00:05:39.321 START TEST thread_poller_perf 00:05:39.321 ************************************ 00:05:39.321 22:46:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:39.321 [2024-12-13 22:46:18.426693] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:39.321 [2024-12-13 22:46:18.426839] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61315 ] 00:05:39.579 [2024-12-13 22:46:18.586055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.579 Running 1000 pollers for 1 seconds with 1 microseconds period. 
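Relating the poller_perf flags to the banner just printed; this mapping is inferred from the banner text rather than from the tool's documentation:
# -b 1000 -> 1000 pollers
# -l 1    -> 1 microsecond poller period (the second run below uses -l 0)
# -t 1    -> run for 1 second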
00:05:39.579 [2024-12-13 22:46:18.692074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.952 [2024-12-13T22:46:20.092Z] ====================================== 00:05:40.952 [2024-12-13T22:46:20.092Z] busy:2613172592 (cyc) 00:05:40.952 [2024-12-13T22:46:20.092Z] total_run_count: 307000 00:05:40.952 [2024-12-13T22:46:20.092Z] tsc_hz: 2600000000 (cyc) 00:05:40.952 [2024-12-13T22:46:20.092Z] ====================================== 00:05:40.952 [2024-12-13T22:46:20.092Z] poller_cost: 8511 (cyc), 3273 (nsec) 00:05:40.952 00:05:40.952 real 0m1.465s 00:05:40.952 user 0m1.284s 00:05:40.952 sys 0m0.075s 00:05:40.952 22:46:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.952 22:46:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.952 ************************************ 00:05:40.952 END TEST thread_poller_perf 00:05:40.952 ************************************ 00:05:40.952 22:46:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:40.952 22:46:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:40.952 22:46:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.952 22:46:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.952 ************************************ 00:05:40.952 START TEST thread_poller_perf 00:05:40.952 ************************************ 00:05:40.952 22:46:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:40.952 [2024-12-13 22:46:19.927706] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:40.952 [2024-12-13 22:46:19.927804] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61351 ] 00:05:40.952 [2024-12-13 22:46:20.080230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.210 Running 1000 pollers for 1 seconds with 0 microseconds period. 
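The poller_cost line in the first run above follows from the counters printed with it; a worked check, assuming poller_cost is busy cycles divided by total_run_count and the nanosecond figure is that quotient converted via tsc_hz:
# 2613172592 cyc busy / 307000 runs  ≈ 8511 cyc per poll
# 8511 cyc / 2600000000 cyc per sec  ≈ 3273 ns per poll
# (the second run below works out the same way: 2603075304 / 3647000 ≈ 713 cyc ≈ 274 ns)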
00:05:41.210 [2024-12-13 22:46:20.191860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.586 [2024-12-13T22:46:21.726Z] ====================================== 00:05:42.586 [2024-12-13T22:46:21.726Z] busy:2603075304 (cyc) 00:05:42.586 [2024-12-13T22:46:21.726Z] total_run_count: 3647000 00:05:42.586 [2024-12-13T22:46:21.726Z] tsc_hz: 2600000000 (cyc) 00:05:42.586 [2024-12-13T22:46:21.726Z] ====================================== 00:05:42.586 [2024-12-13T22:46:21.726Z] poller_cost: 713 (cyc), 274 (nsec) 00:05:42.586 00:05:42.586 real 0m1.454s 00:05:42.586 user 0m1.279s 00:05:42.586 sys 0m0.067s 00:05:42.586 22:46:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.586 22:46:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.586 ************************************ 00:05:42.586 END TEST thread_poller_perf 00:05:42.586 ************************************ 00:05:42.586 22:46:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:42.586 00:05:42.586 real 0m3.166s 00:05:42.586 user 0m2.684s 00:05:42.586 sys 0m0.272s 00:05:42.586 22:46:21 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.586 22:46:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.586 ************************************ 00:05:42.586 END TEST thread 00:05:42.586 ************************************ 00:05:42.586 22:46:21 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:42.586 22:46:21 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:42.586 22:46:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.586 22:46:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.586 22:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:42.586 ************************************ 00:05:42.586 START TEST app_cmdline 00:05:42.586 ************************************ 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:42.586 * Looking for test storage... 
00:05:42.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.586 22:46:21 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.586 --rc genhtml_branch_coverage=1 00:05:42.586 --rc genhtml_function_coverage=1 00:05:42.586 --rc genhtml_legend=1 00:05:42.586 --rc geninfo_all_blocks=1 00:05:42.586 --rc geninfo_unexecuted_blocks=1 00:05:42.586 00:05:42.586 ' 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.586 --rc genhtml_branch_coverage=1 00:05:42.586 --rc genhtml_function_coverage=1 00:05:42.586 --rc genhtml_legend=1 00:05:42.586 --rc geninfo_all_blocks=1 00:05:42.586 --rc geninfo_unexecuted_blocks=1 00:05:42.586 
00:05:42.586 ' 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.586 --rc genhtml_branch_coverage=1 00:05:42.586 --rc genhtml_function_coverage=1 00:05:42.586 --rc genhtml_legend=1 00:05:42.586 --rc geninfo_all_blocks=1 00:05:42.586 --rc geninfo_unexecuted_blocks=1 00:05:42.586 00:05:42.586 ' 00:05:42.586 22:46:21 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.586 --rc genhtml_branch_coverage=1 00:05:42.586 --rc genhtml_function_coverage=1 00:05:42.586 --rc genhtml_legend=1 00:05:42.586 --rc geninfo_all_blocks=1 00:05:42.586 --rc geninfo_unexecuted_blocks=1 00:05:42.586 00:05:42.586 ' 00:05:42.586 22:46:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:42.586 22:46:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61435 00:05:42.587 22:46:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:42.587 22:46:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61435 00:05:42.587 22:46:21 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61435 ']' 00:05:42.587 22:46:21 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.587 22:46:21 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.587 22:46:21 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.587 22:46:21 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.587 22:46:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:42.587 [2024-12-13 22:46:21.622354] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:42.587 [2024-12-13 22:46:21.622452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61435 ] 00:05:42.866 [2024-12-13 22:46:21.772707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.866 [2024-12-13 22:46:21.864507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.433 22:46:22 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.433 22:46:22 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:43.433 22:46:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:43.690 { 00:05:43.690 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:05:43.690 "fields": { 00:05:43.690 "major": 25, 00:05:43.690 "minor": 1, 00:05:43.690 "patch": 0, 00:05:43.690 "suffix": "-pre", 00:05:43.690 "commit": "e01cb43b8" 00:05:43.690 } 00:05:43.690 } 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:43.690 22:46:22 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.690 22:46:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:43.690 22:46:22 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:43.690 22:46:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.690 22:46:22 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:43.691 22:46:22 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.949 request: 00:05:43.949 { 00:05:43.949 "method": "env_dpdk_get_mem_stats", 00:05:43.949 "req_id": 1 00:05:43.949 } 00:05:43.949 Got JSON-RPC error response 00:05:43.949 response: 00:05:43.949 { 00:05:43.949 "code": -32601, 00:05:43.949 "message": "Method not found" 00:05:43.949 } 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.949 22:46:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61435 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61435 ']' 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61435 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61435 00:05:43.949 killing process with pid 61435 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61435' 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@973 -- # kill 61435 00:05:43.949 22:46:22 app_cmdline -- common/autotest_common.sh@978 -- # wait 61435 00:05:45.323 00:05:45.323 real 0m2.881s 00:05:45.323 user 0m3.130s 00:05:45.323 sys 0m0.444s 00:05:45.323 22:46:24 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.323 22:46:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.323 ************************************ 00:05:45.323 END TEST app_cmdline 00:05:45.323 ************************************ 00:05:45.323 22:46:24 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:45.323 22:46:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.323 22:46:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.323 22:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:45.323 ************************************ 00:05:45.323 START TEST version 00:05:45.323 ************************************ 00:05:45.323 22:46:24 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:45.323 * Looking for test storage... 
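The cmdline test above exercises the RPC allow-list; a condensed sketch of the three calls involved, assuming direct rpc.py invocation (the trace goes through rpc_cmd and the NOT helper):
# Target was started with: spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed, returns the version JSON above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed, returns exactly the two permitted names
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # not on the allow-list -> -32601 "Method not found"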
00:05:45.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:45.323 22:46:24 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.323 22:46:24 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.323 22:46:24 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.582 22:46:24 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.582 22:46:24 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.582 22:46:24 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.582 22:46:24 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.582 22:46:24 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.582 22:46:24 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.582 22:46:24 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.582 22:46:24 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.582 22:46:24 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.582 22:46:24 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.582 22:46:24 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.582 22:46:24 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.582 22:46:24 version -- scripts/common.sh@344 -- # case "$op" in 00:05:45.582 22:46:24 version -- scripts/common.sh@345 -- # : 1 00:05:45.582 22:46:24 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.582 22:46:24 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.582 22:46:24 version -- scripts/common.sh@365 -- # decimal 1 00:05:45.582 22:46:24 version -- scripts/common.sh@353 -- # local d=1 00:05:45.582 22:46:24 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.582 22:46:24 version -- scripts/common.sh@355 -- # echo 1 00:05:45.582 22:46:24 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.582 22:46:24 version -- scripts/common.sh@366 -- # decimal 2 00:05:45.582 22:46:24 version -- scripts/common.sh@353 -- # local d=2 00:05:45.582 22:46:24 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.582 22:46:24 version -- scripts/common.sh@355 -- # echo 2 00:05:45.582 22:46:24 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.582 22:46:24 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.582 22:46:24 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.582 22:46:24 version -- scripts/common.sh@368 -- # return 0 00:05:45.582 22:46:24 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.582 22:46:24 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.582 --rc genhtml_branch_coverage=1 00:05:45.582 --rc genhtml_function_coverage=1 00:05:45.583 --rc genhtml_legend=1 00:05:45.583 --rc geninfo_all_blocks=1 00:05:45.583 --rc geninfo_unexecuted_blocks=1 00:05:45.583 00:05:45.583 ' 00:05:45.583 22:46:24 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.583 --rc genhtml_branch_coverage=1 00:05:45.583 --rc genhtml_function_coverage=1 00:05:45.583 --rc genhtml_legend=1 00:05:45.583 --rc geninfo_all_blocks=1 00:05:45.583 --rc geninfo_unexecuted_blocks=1 00:05:45.583 00:05:45.583 ' 00:05:45.583 22:46:24 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.583 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:45.583 --rc genhtml_branch_coverage=1 00:05:45.583 --rc genhtml_function_coverage=1 00:05:45.583 --rc genhtml_legend=1 00:05:45.583 --rc geninfo_all_blocks=1 00:05:45.583 --rc geninfo_unexecuted_blocks=1 00:05:45.583 00:05:45.583 ' 00:05:45.583 22:46:24 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.583 --rc genhtml_branch_coverage=1 00:05:45.583 --rc genhtml_function_coverage=1 00:05:45.583 --rc genhtml_legend=1 00:05:45.583 --rc geninfo_all_blocks=1 00:05:45.583 --rc geninfo_unexecuted_blocks=1 00:05:45.583 00:05:45.583 ' 00:05:45.583 22:46:24 version -- app/version.sh@17 -- # get_header_version major 00:05:45.583 22:46:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.583 22:46:24 version -- app/version.sh@14 -- # cut -f2 00:05:45.583 22:46:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.583 22:46:24 version -- app/version.sh@17 -- # major=25 00:05:45.583 22:46:24 version -- app/version.sh@18 -- # get_header_version minor 00:05:45.583 22:46:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.583 22:46:24 version -- app/version.sh@14 -- # cut -f2 00:05:45.583 22:46:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.583 22:46:24 version -- app/version.sh@18 -- # minor=1 00:05:45.583 22:46:24 version -- app/version.sh@19 -- # get_header_version patch 00:05:45.583 22:46:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.583 22:46:24 version -- app/version.sh@14 -- # cut -f2 00:05:45.583 22:46:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.583 22:46:24 version -- app/version.sh@19 -- # patch=0 00:05:45.583 22:46:24 version -- app/version.sh@20 -- # get_header_version suffix 00:05:45.583 22:46:24 version -- app/version.sh@14 -- # cut -f2 00:05:45.583 22:46:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.583 22:46:24 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.583 22:46:24 version -- app/version.sh@20 -- # suffix=-pre 00:05:45.583 22:46:24 version -- app/version.sh@22 -- # version=25.1 00:05:45.583 22:46:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:45.583 22:46:24 version -- app/version.sh@28 -- # version=25.1rc0 00:05:45.583 22:46:24 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:45.583 22:46:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:45.583 22:46:24 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:45.583 22:46:24 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:45.583 00:05:45.583 real 0m0.189s 00:05:45.583 user 0m0.116s 00:05:45.583 sys 0m0.102s 00:05:45.583 22:46:24 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.583 22:46:24 version -- common/autotest_common.sh@10 -- # set +x 00:05:45.583 ************************************ 00:05:45.583 END TEST version 00:05:45.583 ************************************ 00:05:45.583 22:46:24 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:45.583 22:46:24 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:45.583 22:46:24 -- spdk/autotest.sh@194 -- # uname -s 00:05:45.583 22:46:24 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:45.583 22:46:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.583 22:46:24 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.583 22:46:24 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:45.583 22:46:24 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:45.583 22:46:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.583 22:46:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.583 22:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:45.583 ************************************ 00:05:45.583 START TEST blockdev_nvme 00:05:45.583 ************************************ 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:45.583 * Looking for test storage... 00:05:45.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.583 22:46:24 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.583 --rc genhtml_branch_coverage=1 00:05:45.583 --rc genhtml_function_coverage=1 00:05:45.583 --rc genhtml_legend=1 00:05:45.583 --rc geninfo_all_blocks=1 00:05:45.583 --rc geninfo_unexecuted_blocks=1 00:05:45.583 00:05:45.583 ' 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.583 --rc genhtml_branch_coverage=1 00:05:45.583 --rc genhtml_function_coverage=1 00:05:45.583 --rc genhtml_legend=1 00:05:45.583 --rc geninfo_all_blocks=1 00:05:45.583 --rc geninfo_unexecuted_blocks=1 00:05:45.583 00:05:45.583 ' 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.583 --rc genhtml_branch_coverage=1 00:05:45.583 --rc genhtml_function_coverage=1 00:05:45.583 --rc genhtml_legend=1 00:05:45.583 --rc geninfo_all_blocks=1 00:05:45.583 --rc geninfo_unexecuted_blocks=1 00:05:45.583 00:05:45.583 ' 00:05:45.583 22:46:24 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.583 --rc genhtml_branch_coverage=1 00:05:45.583 --rc genhtml_function_coverage=1 00:05:45.583 --rc genhtml_legend=1 00:05:45.583 --rc geninfo_all_blocks=1 00:05:45.583 --rc geninfo_unexecuted_blocks=1 00:05:45.583 00:05:45.583 ' 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:45.583 22:46:24 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:45.583 22:46:24 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61607 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61607 00:05:45.841 22:46:24 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61607 ']' 00:05:45.841 22:46:24 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.841 22:46:24 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.841 22:46:24 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:45.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.841 22:46:24 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.841 22:46:24 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.841 22:46:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:45.841 [2024-12-13 22:46:24.795678] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:45.841 [2024-12-13 22:46:24.795815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61607 ] 00:05:45.841 [2024-12-13 22:46:24.955173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.099 [2024-12-13 22:46:25.054556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.665 22:46:25 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.665 22:46:25 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:46.665 22:46:25 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:46.665 22:46:25 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:46.665 22:46:25 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:46.665 22:46:25 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:46.665 22:46:25 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:46.665 22:46:25 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:46.665 22:46:25 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.665 22:46:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.924 22:46:25 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.924 22:46:25 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:46.924 22:46:25 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.924 22:46:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.924 22:46:26 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:46.924 22:46:26 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.924 22:46:26 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.924 22:46:26 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.924 22:46:26 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:46.924 22:46:26 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.924 22:46:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.924 22:46:26 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:47.183 22:46:26 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.183 22:46:26 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:47.183 22:46:26 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:47.184 22:46:26 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4efc0ed6-2847-4c8b-96a8-e43c0fea8e7d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4efc0ed6-2847-4c8b-96a8-e43c0fea8e7d",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d810063e-c3f4-4dcb-af43-c78f13afe579"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d810063e-c3f4-4dcb-af43-c78f13afe579",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a91b841a-37d6-4086-a7b9-2d603549d075"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a91b841a-37d6-4086-a7b9-2d603549d075",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4576e89b-6d03-4b4d-a0d6-0353ecd1a15d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4576e89b-6d03-4b4d-a0d6-0353ecd1a15d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a541d5d7-eb10-4423-aa8c-b3558cfb9b67"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "a541d5d7-eb10-4423-aa8c-b3558cfb9b67",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "21989af1-a773-4733-bced-100bb1ab033d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "21989af1-a773-4733-bced-100bb1ab033d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:47.184 22:46:26 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:47.184 22:46:26 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:47.184 22:46:26 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:47.184 22:46:26 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61607 00:05:47.184 22:46:26 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61607 ']' 00:05:47.184 22:46:26 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61607 00:05:47.184 22:46:26 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:47.184 22:46:26 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.184 22:46:26 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61607 00:05:47.184 22:46:26 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.184 22:46:26 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.184 killing process with pid 61607 00:05:47.184 22:46:26 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61607' 00:05:47.184 22:46:26 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61607 00:05:47.184 22:46:26 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61607 00:05:48.559 22:46:27 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:48.559 22:46:27 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:48.559 22:46:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:48.559 22:46:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.559 22:46:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.559 ************************************ 00:05:48.559 START TEST bdev_hello_world 00:05:48.559 ************************************ 00:05:48.559 22:46:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:48.559 [2024-12-13 22:46:27.606387] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:48.559 [2024-12-13 22:46:27.606495] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61691 ] 00:05:48.817 [2024-12-13 22:46:27.753497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.817 [2024-12-13 22:46:27.834192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.383 [2024-12-13 22:46:28.328141] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:49.383 [2024-12-13 22:46:28.328184] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:49.383 [2024-12-13 22:46:28.328199] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:49.383 [2024-12-13 22:46:28.330134] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:49.383 [2024-12-13 22:46:28.330658] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:49.384 [2024-12-13 22:46:28.330681] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:49.384 [2024-12-13 22:46:28.330906] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
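Note: the long bdev_get_bdevs dump earlier in this trace is how blockdev.sh builds its working set of bdevs: it asks the target for every registered block device, filters out anything already claimed, and keeps only the names (Nvme0n1, Nvme1n1, Nvme2n1 through Nvme2n3, and Nvme3n1 here). Outside the harness the same query can be reproduced, roughly, with the rpc.py and jq calls the script itself uses:

    # print the names of all unclaimed bdevs on a running target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'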
00:05:49.384 00:05:49.384 [2024-12-13 22:46:28.330927] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:49.949 00:05:49.949 real 0m1.342s 00:05:49.949 user 0m1.094s 00:05:49.949 sys 0m0.142s 00:05:49.949 22:46:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.949 22:46:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:49.949 ************************************ 00:05:49.949 END TEST bdev_hello_world 00:05:49.949 ************************************ 00:05:49.949 22:46:28 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:49.949 22:46:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:49.949 22:46:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.949 22:46:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.949 ************************************ 00:05:49.949 START TEST bdev_bounds 00:05:49.949 ************************************ 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61722 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.949 Process bdevio pid: 61722 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61722' 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61722 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61722 ']' 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.949 22:46:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:49.949 [2024-12-13 22:46:28.997706] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
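Note: stripped of the run_test bookkeeping, the bdev_hello_world test that just finished is a single command: run the hello_bdev example with a JSON config that attaches the NVMe controllers, and tell it which bdev to open. A minimal sketch, reusing the config file this run already generated, would be:

    # the essential hello_world step: write a string to Nvme0n1 and read it back
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1

bdev.json is essentially the gen_nvme.sh output wrapped in the app's "subsystems" envelope, i.e. one bdev_nvme_attach_controller entry per PCIe address (0000:00:10.0 through 0000:00:13.0 in this run), as shown in the load_subsystem_config call earlier.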
00:05:49.949 [2024-12-13 22:46:28.997838] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61722 ] 00:05:50.207 [2024-12-13 22:46:29.153060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.207 [2024-12-13 22:46:29.234975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.207 [2024-12-13 22:46:29.235341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.207 [2024-12-13 22:46:29.235364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.774 22:46:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.774 22:46:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:50.774 22:46:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:50.774 I/O targets: 00:05:50.774 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:50.774 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:50.774 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:50.774 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:50.774 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:50.774 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:50.774 00:05:50.774 00:05:50.774 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.774 http://cunit.sourceforge.net/ 00:05:50.774 00:05:50.774 00:05:50.774 Suite: bdevio tests on: Nvme3n1 00:05:50.774 Test: blockdev write read block ...passed 00:05:51.033 Test: blockdev write zeroes read block ...passed 00:05:51.033 Test: blockdev write zeroes read no split ...passed 00:05:51.033 Test: blockdev write zeroes read split ...passed 00:05:51.033 Test: blockdev write zeroes read split partial ...passed 00:05:51.033 Test: blockdev reset ...[2024-12-13 22:46:29.953730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:51.033 [2024-12-13 22:46:29.956666] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
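Note: the bounds test works in two halves, which is why the suite output only begins once tests.py is called: bdevio is started with -w so it loads the bdevs and then waits, and the actual I/O suites are triggered over RPC. In isolation the two steps look roughly like this (the trailing empty argument the harness passes is omitted):

    # sketch: start bdevio in wait mode, then kick off the I/O suites over RPC
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # wait for /var/tmp/spdk.sock as in the earlier wait-loop sketch, then:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"; wait "$bdevio_pid"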
00:05:51.033 passed 00:05:51.033 Test: blockdev write read 8 blocks ...passed 00:05:51.033 Test: blockdev write read size > 128k ...passed 00:05:51.033 Test: blockdev write read invalid size ...passed 00:05:51.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:51.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:51.033 Test: blockdev write read max offset ...passed 00:05:51.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:51.033 Test: blockdev writev readv 8 blocks ...passed 00:05:51.033 Test: blockdev writev readv 30 x 1block ...passed 00:05:51.033 Test: blockdev writev readv block ...passed 00:05:51.033 Test: blockdev writev readv size > 128k ...passed 00:05:51.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:51.033 Test: blockdev comparev and writev ...[2024-12-13 22:46:29.963283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb80a000 len:0x1000 00:05:51.033 [2024-12-13 22:46:29.963328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:51.033 passed 00:05:51.033 Test: blockdev nvme passthru rw ...passed 00:05:51.033 Test: blockdev nvme passthru vendor specific ...[2024-12-13 22:46:29.963877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:51.033 [2024-12-13 22:46:29.963907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:51.033 passed 00:05:51.033 Test: blockdev nvme admin passthru ...passed 00:05:51.033 Test: blockdev copy ...passed 00:05:51.033 Suite: bdevio tests on: Nvme2n3 00:05:51.033 Test: blockdev write read block ...passed 00:05:51.033 Test: blockdev write zeroes read block ...passed 00:05:51.033 Test: blockdev write zeroes read no split ...passed 00:05:51.033 Test: blockdev write zeroes read split ...passed 00:05:51.033 Test: blockdev write zeroes read split partial ...passed 00:05:51.033 Test: blockdev reset ...[2024-12-13 22:46:30.005105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:51.033 [2024-12-13 22:46:30.008535] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:51.033 passed 00:05:51.033 Test: blockdev write read 8 blocks ...passed 00:05:51.033 Test: blockdev write read size > 128k ...passed 00:05:51.033 Test: blockdev write read invalid size ...passed 00:05:51.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:51.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:51.033 Test: blockdev write read max offset ...passed 00:05:51.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:51.033 Test: blockdev writev readv 8 blocks ...passed 00:05:51.033 Test: blockdev writev readv 30 x 1block ...passed 00:05:51.033 Test: blockdev writev readv block ...passed 00:05:51.033 Test: blockdev writev readv size > 128k ...passed 00:05:51.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:51.033 Test: blockdev comparev and writev ...[2024-12-13 22:46:30.016974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29ea06000 len:0x1000 00:05:51.033 [2024-12-13 22:46:30.017018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:51.033 passed 00:05:51.033 Test: blockdev nvme passthru rw ...passed 00:05:51.033 Test: blockdev nvme passthru vendor specific ...[2024-12-13 22:46:30.017693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:51.033 [2024-12-13 22:46:30.017712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:51.033 passed 00:05:51.033 Test: blockdev nvme admin passthru ...passed 00:05:51.033 Test: blockdev copy ...passed 00:05:51.033 Suite: bdevio tests on: Nvme2n2 00:05:51.033 Test: blockdev write read block ...passed 00:05:51.033 Test: blockdev write zeroes read block ...passed 00:05:51.033 Test: blockdev write zeroes read no split ...passed 00:05:51.033 Test: blockdev write zeroes read split ...passed 00:05:51.033 Test: blockdev write zeroes read split partial ...passed 00:05:51.033 Test: blockdev reset ...[2024-12-13 22:46:30.078156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:51.033 [2024-12-13 22:46:30.081284] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:51.033 passed 00:05:51.033 Test: blockdev write read 8 blocks ...passed 00:05:51.033 Test: blockdev write read size > 128k ...passed 00:05:51.033 Test: blockdev write read invalid size ...passed 00:05:51.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:51.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:51.033 Test: blockdev write read max offset ...passed 00:05:51.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:51.033 Test: blockdev writev readv 8 blocks ...passed 00:05:51.033 Test: blockdev writev readv 30 x 1block ...passed 00:05:51.033 Test: blockdev writev readv block ...passed 00:05:51.033 Test: blockdev writev readv size > 128k ...passed 00:05:51.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:51.033 Test: blockdev comparev and writev ...[2024-12-13 22:46:30.088406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d643c000 len:0x1000 00:05:51.034 [2024-12-13 22:46:30.088446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:51.034 passed 00:05:51.034 Test: blockdev nvme passthru rw ...passed 00:05:51.034 Test: blockdev nvme passthru vendor specific ...[2024-12-13 22:46:30.089132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:51.034 passed 00:05:51.034 Test: blockdev nvme admin passthru ...[2024-12-13 22:46:30.089160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:51.034 passed 00:05:51.034 Test: blockdev copy ...passed 00:05:51.034 Suite: bdevio tests on: Nvme2n1 00:05:51.034 Test: blockdev write read block ...passed 00:05:51.034 Test: blockdev write zeroes read block ...passed 00:05:51.034 Test: blockdev write zeroes read no split ...passed 00:05:51.034 Test: blockdev write zeroes read split ...passed 00:05:51.034 Test: blockdev write zeroes read split partial ...passed 00:05:51.034 Test: blockdev reset ...[2024-12-13 22:46:30.145883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:51.034 [2024-12-13 22:46:30.148799] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:51.034 passed 00:05:51.034 Test: blockdev write read 8 blocks ...passed 00:05:51.034 Test: blockdev write read size > 128k ...passed 00:05:51.034 Test: blockdev write read invalid size ...passed 00:05:51.034 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:51.034 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:51.034 Test: blockdev write read max offset ...passed 00:05:51.034 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:51.034 Test: blockdev writev readv 8 blocks ...passed 00:05:51.034 Test: blockdev writev readv 30 x 1block ...passed 00:05:51.034 Test: blockdev writev readv block ...passed 00:05:51.034 Test: blockdev writev readv size > 128k ...passed 00:05:51.034 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:51.034 Test: blockdev comparev and writev ...[2024-12-13 22:46:30.154544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6438000 len:0x1000 00:05:51.034 [2024-12-13 22:46:30.154587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:51.034 passed 00:05:51.034 Test: blockdev nvme passthru rw ...passed 00:05:51.034 Test: blockdev nvme passthru vendor specific ...[2024-12-13 22:46:30.155069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:51.034 [2024-12-13 22:46:30.155088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:51.034 passed 00:05:51.034 Test: blockdev nvme admin passthru ...passed 00:05:51.034 Test: blockdev copy ...passed 00:05:51.034 Suite: bdevio tests on: Nvme1n1 00:05:51.034 Test: blockdev write read block ...passed 00:05:51.034 Test: blockdev write zeroes read block ...passed 00:05:51.034 Test: blockdev write zeroes read no split ...passed 00:05:51.292 Test: blockdev write zeroes read split ...passed 00:05:51.292 Test: blockdev write zeroes read split partial ...passed 00:05:51.292 Test: blockdev reset ...[2024-12-13 22:46:30.199914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:51.292 [2024-12-13 22:46:30.202930] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:51.292 passed 00:05:51.292 Test: blockdev write read 8 blocks ...passed 00:05:51.292 Test: blockdev write read size > 128k ...passed 00:05:51.292 Test: blockdev write read invalid size ...passed 00:05:51.292 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:51.292 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:51.292 Test: blockdev write read max offset ...passed 00:05:51.292 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:51.292 Test: blockdev writev readv 8 blocks ...passed 00:05:51.292 Test: blockdev writev readv 30 x 1block ...passed 00:05:51.292 Test: blockdev writev readv block ...passed 00:05:51.292 Test: blockdev writev readv size > 128k ...passed 00:05:51.292 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:51.292 Test: blockdev comparev and writev ...[2024-12-13 22:46:30.208696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6434000 len:0x1000 00:05:51.292 [2024-12-13 22:46:30.208751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:51.292 passed 00:05:51.292 Test: blockdev nvme passthru rw ...passed 00:05:51.292 Test: blockdev nvme passthru vendor specific ...[2024-12-13 22:46:30.209327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:51.292 [2024-12-13 22:46:30.209346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:51.292 passed 00:05:51.292 Test: blockdev nvme admin passthru ...passed 00:05:51.292 Test: blockdev copy ...passed 00:05:51.292 Suite: bdevio tests on: Nvme0n1 00:05:51.292 Test: blockdev write read block ...passed 00:05:51.292 Test: blockdev write zeroes read block ...passed 00:05:51.292 Test: blockdev write zeroes read no split ...passed 00:05:51.292 Test: blockdev write zeroes read split ...passed 00:05:51.292 Test: blockdev write zeroes read split partial ...passed 00:05:51.292 Test: blockdev reset ...[2024-12-13 22:46:30.253769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:51.292 [2024-12-13 22:46:30.256484] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:51.292 passed 00:05:51.292 Test: blockdev write read 8 blocks ...passed 00:05:51.292 Test: blockdev write read size > 128k ...passed 00:05:51.292 Test: blockdev write read invalid size ...passed 00:05:51.292 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:51.292 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:51.292 Test: blockdev write read max offset ...passed 00:05:51.292 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:51.292 Test: blockdev writev readv 8 blocks ...passed 00:05:51.292 Test: blockdev writev readv 30 x 1block ...passed 00:05:51.292 Test: blockdev writev readv block ...passed 00:05:51.292 Test: blockdev writev readv size > 128k ...passed 00:05:51.292 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:51.292 Test: blockdev comparev and writev ...[2024-12-13 22:46:30.261842] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:51.292 separate metadata which is not supported yet. 
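Note: the *ERROR* line just above is expected, not a failure. The earlier bdev dump shows Nvme0n1 was created with 64 bytes of separate (non-interleaved) metadata ("md_size": 64, "md_interleave": false), and the comparev_and_writev helper in bdevio skips bdevs like that, so the test still counts as passed. Whether a given bdev falls into this case can be checked directly, for example (assuming rpc.py's -b name filter and a target with the same config loaded):

    # inspect the metadata layout of a single bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {md_size, md_interleave, dif_type}'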
00:05:51.292 passed 00:05:51.292 Test: blockdev nvme passthru rw ...passed 00:05:51.292 Test: blockdev nvme passthru vendor specific ...passed 00:05:51.292 Test: blockdev nvme admin passthru ...[2024-12-13 22:46:30.262267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:51.292 [2024-12-13 22:46:30.262311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:51.292 passed 00:05:51.292 Test: blockdev copy ...passed 00:05:51.292 00:05:51.292 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.292 suites 6 6 n/a 0 0 00:05:51.292 tests 138 138 138 0 0 00:05:51.292 asserts 893 893 893 0 n/a 00:05:51.292 00:05:51.292 Elapsed time = 0.953 seconds 00:05:51.292 0 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61722 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61722 ']' 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61722 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61722 00:05:51.292 killing process with pid 61722 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61722' 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61722 00:05:51.292 22:46:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61722 00:05:52.227 22:46:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:52.227 00:05:52.227 real 0m2.077s 00:05:52.227 user 0m5.337s 00:05:52.227 sys 0m0.276s 00:05:52.227 22:46:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.227 22:46:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:52.227 ************************************ 00:05:52.227 END TEST bdev_bounds 00:05:52.227 ************************************ 00:05:52.228 22:46:31 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:52.228 22:46:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:52.228 22:46:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.228 22:46:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:52.228 ************************************ 00:05:52.228 START TEST bdev_nbd 00:05:52.228 ************************************ 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:52.228 22:46:31 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61776 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61776 /var/tmp/spdk-nbd.sock 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61776 ']' 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:52.228 22:46:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:52.228 [2024-12-13 22:46:31.121318] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
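Note: the nbd test that is starting here exports each bdev as a kernel /dev/nbdX node through the dedicated /var/tmp/spdk-nbd.sock RPC socket (hence the [[ -e /sys/module/nbd ]] check above), reads one 4 KiB block back through the block device with dd to prove the mapping works, and then tears the export down again. Condensed to a single device, the flow the following log lines perform is approximately:

    # export one bdev over NBD, verify it, and stop it (requires the nbd kernel module)
    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks   # shows the Nvme0n1 <-> /dev/nbd0 mapping
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0

/tmp/nbdtest here is a stand-in path; the harness writes its copy under the repo's test/bdev directory.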
00:05:52.228 [2024-12-13 22:46:31.121874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:52.228 [2024-12-13 22:46:31.284273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.489 [2024-12-13 22:46:31.399556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:53.055 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:53.332 1+0 records in 
00:05:53.332 1+0 records out 00:05:53.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026397 s, 15.5 MB/s 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:53.332 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:53.591 1+0 records in 00:05:53.591 1+0 records out 00:05:53.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520429 s, 7.9 MB/s 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:53.591 1+0 records in 00:05:53.591 1+0 records out 00:05:53.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368369 s, 11.1 MB/s 00:05:53.591 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:53.849 1+0 records in 00:05:53.849 1+0 records out 00:05:53.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445424 s, 9.2 MB/s 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.849 22:46:32 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:53.849 22:46:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.107 1+0 records in 00:05:54.107 1+0 records out 00:05:54.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519301 s, 7.9 MB/s 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:54.107 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:54.364 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:54.364 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.365 1+0 records in 00:05:54.365 1+0 records out 00:05:54.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048135 s, 8.5 MB/s 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:54.365 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd0", 00:05:54.623 "bdev_name": "Nvme0n1" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd1", 00:05:54.623 "bdev_name": "Nvme1n1" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd2", 00:05:54.623 "bdev_name": "Nvme2n1" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd3", 00:05:54.623 "bdev_name": "Nvme2n2" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd4", 00:05:54.623 "bdev_name": "Nvme2n3" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd5", 00:05:54.623 "bdev_name": "Nvme3n1" 00:05:54.623 } 00:05:54.623 ]' 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd0", 00:05:54.623 "bdev_name": "Nvme0n1" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd1", 00:05:54.623 "bdev_name": "Nvme1n1" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd2", 00:05:54.623 "bdev_name": "Nvme2n1" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd3", 00:05:54.623 "bdev_name": "Nvme2n2" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd4", 00:05:54.623 "bdev_name": "Nvme2n3" 00:05:54.623 }, 00:05:54.623 { 00:05:54.623 "nbd_device": "/dev/nbd5", 00:05:54.623 "bdev_name": "Nvme3n1" 00:05:54.623 } 00:05:54.623 ]' 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.623 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.882 22:46:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.140 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.400 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.659 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.919 22:46:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.177 22:46:35 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:56.177 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:56.435 /dev/nbd0 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.435 
22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.435 1+0 records in 00:05:56.435 1+0 records out 00:05:56.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364517 s, 11.2 MB/s 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:56.435 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:56.693 /dev/nbd1 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.693 1+0 records in 00:05:56.693 1+0 records out 00:05:56.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048559 s, 8.4 MB/s 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:56.693 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:56.951 /dev/nbd10 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.951 1+0 records in 00:05:56.951 1+0 records out 00:05:56.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458002 s, 8.9 MB/s 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:56.951 22:46:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:57.210 /dev/nbd11 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.210 22:46:36 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.210 1+0 records in 00:05:57.210 1+0 records out 00:05:57.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564081 s, 7.3 MB/s 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:57.210 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:57.210 /dev/nbd12 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.469 1+0 records in 00:05:57.469 1+0 records out 00:05:57.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406678 s, 10.1 MB/s 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:57.469 /dev/nbd13 
00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.469 1+0 records in 00:05:57.469 1+0 records out 00:05:57.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367402 s, 11.1 MB/s 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.469 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd0", 00:05:57.728 "bdev_name": "Nvme0n1" 00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd1", 00:05:57.728 "bdev_name": "Nvme1n1" 00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd10", 00:05:57.728 "bdev_name": "Nvme2n1" 00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd11", 00:05:57.728 "bdev_name": "Nvme2n2" 00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd12", 00:05:57.728 "bdev_name": "Nvme2n3" 00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd13", 00:05:57.728 "bdev_name": "Nvme3n1" 00:05:57.728 } 00:05:57.728 ]' 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd0", 00:05:57.728 "bdev_name": "Nvme0n1" 00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd1", 00:05:57.728 "bdev_name": "Nvme1n1" 00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd10", 00:05:57.728 "bdev_name": "Nvme2n1" 
00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd11", 00:05:57.728 "bdev_name": "Nvme2n2" 00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd12", 00:05:57.728 "bdev_name": "Nvme2n3" 00:05:57.728 }, 00:05:57.728 { 00:05:57.728 "nbd_device": "/dev/nbd13", 00:05:57.728 "bdev_name": "Nvme3n1" 00:05:57.728 } 00:05:57.728 ]' 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.728 /dev/nbd1 00:05:57.728 /dev/nbd10 00:05:57.728 /dev/nbd11 00:05:57.728 /dev/nbd12 00:05:57.728 /dev/nbd13' 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.728 /dev/nbd1 00:05:57.728 /dev/nbd10 00:05:57.728 /dev/nbd11 00:05:57.728 /dev/nbd12 00:05:57.728 /dev/nbd13' 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:57.728 256+0 records in 00:05:57.728 256+0 records out 00:05:57.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120114 s, 87.3 MB/s 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.728 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.987 256+0 records in 00:05:57.987 256+0 records out 00:05:57.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0637062 s, 16.5 MB/s 00:05:57.987 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.987 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.987 256+0 records in 00:05:57.987 256+0 records out 00:05:57.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0714801 s, 14.7 MB/s 00:05:57.987 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.987 22:46:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:57.987 256+0 records in 00:05:57.987 256+0 records out 
00:05:57.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0642388 s, 16.3 MB/s 00:05:57.987 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.987 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:58.246 256+0 records in 00:05:58.246 256+0 records out 00:05:58.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0631799 s, 16.6 MB/s 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:58.246 256+0 records in 00:05:58.246 256+0 records out 00:05:58.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0687884 s, 15.2 MB/s 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:58.246 256+0 records in 00:05:58.246 256+0 records out 00:05:58.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0681253 s, 15.4 MB/s 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.246 22:46:37 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.246 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.504 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.762 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:59.020 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:59.020 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:59.020 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:59.020 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.020 
22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.020 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:59.020 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.020 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.020 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.020 22:46:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.020 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.279 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:59.537 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:59.537 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:59.537 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:59.537 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.538 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.538 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:59.538 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.538 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.538 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.538 22:46:38 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.538 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:59.796 22:46:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:00.062 malloc_lvol_verify 00:06:00.062 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:00.322 607a32dc-1446-486f-84c1-fed663a5ec1c 00:06:00.322 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:00.322 f83f3934-e573-4c63-a1a8-baeafd2f9253 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:00.580 /dev/nbd0 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:00.580 mke2fs 1.47.0 (5-Feb-2023) 00:06:00.580 Discarding device blocks: 0/4096 done 00:06:00.580 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:00.580 00:06:00.580 Allocating group tables: 0/1 done 00:06:00.580 Writing inode tables: 0/1 done 00:06:00.580 Creating journal (1024 blocks): done 00:06:00.580 Writing superblocks and filesystem accounting information: 0/1 done 00:06:00.580 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.580 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61776 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61776 ']' 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61776 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61776 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61776' 00:06:00.838 killing process with pid 61776 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61776 00:06:00.838 22:46:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61776 00:06:01.403 22:46:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:01.403 00:06:01.403 real 0m9.484s 00:06:01.403 user 0m13.675s 00:06:01.403 sys 0m2.990s 00:06:01.403 22:46:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.403 22:46:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:01.403 ************************************ 00:06:01.403 END TEST bdev_nbd 00:06:01.403 ************************************ 00:06:01.661 22:46:40 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:01.661 22:46:40 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:01.661 skipping fio tests on NVMe due to multi-ns failures. 00:06:01.661 22:46:40 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
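The trace up to this point exercises SPDK's NBD export path: each bdev is attached to a /dev/nbdN node over the RPC socket, polled for in /proc/partitions, read once through O_DIRECT, written with random data, compared back, and then detached. A minimal sketch of that start/verify/stop cycle is below; it assumes an SPDK target is already running with its RPC socket at /var/tmp/spdk-nbd.sock and exposes a bdev named Nvme0n1 (both taken from the trace), and it uses /tmp/nbdtest as a scratch file in place of the repo-internal path the harness uses.

```bash
#!/usr/bin/env bash
# Sketch of the NBD export/verify/teardown cycle seen in the trace above.
# Assumptions (not part of the captured log): target already running,
# RPC socket path, bdev name, and /tmp scratch file.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# Export the bdev as /dev/nbd0.
"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0

# Wait for the kernel to list the device, then read one 4 KiB block with
# O_DIRECT as a basic sanity check (the waitfornbd pattern in the log).
for i in $(seq 1 20); do
  grep -q -w nbd0 /proc/partitions && break
  sleep 0.1
done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
rm -f /tmp/nbdtest

# Tear the export down again.
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
```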
00:06:01.661 22:46:40 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:01.661 22:46:40 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:01.661 22:46:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:01.661 22:46:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.661 22:46:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:01.661 ************************************ 00:06:01.661 START TEST bdev_verify 00:06:01.661 ************************************ 00:06:01.661 22:46:40 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:01.661 [2024-12-13 22:46:40.634232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:01.661 [2024-12-13 22:46:40.634327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62148 ] 00:06:01.661 [2024-12-13 22:46:40.784752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.919 [2024-12-13 22:46:40.871017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.919 [2024-12-13 22:46:40.871127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.484 Running I/O for 5 seconds... 00:06:04.856 24832.00 IOPS, 97.00 MiB/s [2024-12-13T22:46:44.931Z] 25056.00 IOPS, 97.88 MiB/s [2024-12-13T22:46:45.864Z] 23978.67 IOPS, 93.67 MiB/s [2024-12-13T22:46:46.797Z] 23072.00 IOPS, 90.12 MiB/s [2024-12-13T22:46:46.797Z] 22681.60 IOPS, 88.60 MiB/s 00:06:07.657 Latency(us) 00:06:07.657 [2024-12-13T22:46:46.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:07.657 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:07.657 Verification LBA range: start 0x0 length 0xbd0bd 00:06:07.657 Nvme0n1 : 5.06 1857.65 7.26 0.00 0.00 68578.12 8822.15 73400.32 00:06:07.658 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:07.658 Nvme0n1 : 5.06 1884.68 7.36 0.00 0.00 67578.44 7057.72 69770.63 00:06:07.658 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0x0 length 0xa0000 00:06:07.658 Nvme1n1 : 5.08 1864.54 7.28 0.00 0.00 68420.99 12351.02 72593.72 00:06:07.658 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0xa0000 length 0xa0000 00:06:07.658 Nvme1n1 : 5.07 1893.07 7.39 0.00 0.00 67313.54 7461.02 66947.54 00:06:07.658 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0x0 length 0x80000 00:06:07.658 Nvme2n1 : 5.08 1863.99 7.28 0.00 0.00 68355.95 10889.06 71787.13 00:06:07.658 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0x80000 length 0x80000 00:06:07.658 Nvme2n1 : 5.07 1892.12 7.39 0.00 0.00 67202.41 9074.22 63317.86 00:06:07.658 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0x0 length 0x80000 00:06:07.658 Nvme2n2 : 5.08 1863.46 7.28 0.00 0.00 68275.62 11191.53 68560.74 00:06:07.658 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0x80000 length 0x80000 00:06:07.658 Nvme2n2 : 5.07 1891.65 7.39 0.00 0.00 67083.26 9225.45 62914.56 00:06:07.658 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0x0 length 0x80000 00:06:07.658 Nvme2n3 : 5.09 1862.35 7.27 0.00 0.00 68197.98 13107.20 69770.63 00:06:07.658 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0x80000 length 0x80000 00:06:07.658 Nvme2n3 : 5.08 1891.25 7.39 0.00 0.00 66998.10 9477.51 66947.54 00:06:07.658 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0x0 length 0x20000 00:06:07.658 Nvme3n1 : 5.09 1861.25 7.27 0.00 0.00 68115.71 13107.20 73803.62 00:06:07.658 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:07.658 Verification LBA range: start 0x20000 length 0x20000 00:06:07.658 Nvme3n1 : 5.08 1890.70 7.39 0.00 0.00 66938.65 9275.86 69367.34 00:06:07.658 [2024-12-13T22:46:46.798Z] =================================================================================================================== 00:06:07.658 [2024-12-13T22:46:46.798Z] Total : 22516.71 87.96 0.00 0.00 67750.71 7057.72 73803.62 00:06:09.036 00:06:09.036 real 0m7.187s 00:06:09.036 user 0m13.478s 00:06:09.036 sys 0m0.218s 00:06:09.036 ************************************ 00:06:09.036 END TEST bdev_verify 00:06:09.036 ************************************ 00:06:09.036 22:46:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.036 22:46:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:09.036 22:46:47 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:09.036 22:46:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:09.036 22:46:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.036 22:46:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.036 ************************************ 00:06:09.036 START TEST bdev_verify_big_io 00:06:09.036 ************************************ 00:06:09.036 22:46:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:09.036 [2024-12-13 22:46:47.927036] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:09.036 [2024-12-13 22:46:47.927229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62241 ] 00:06:09.036 [2024-12-13 22:46:48.107091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.295 [2024-12-13 22:46:48.264990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.295 [2024-12-13 22:46:48.265093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.861 Running I/O for 5 seconds... 00:06:15.953 1285.00 IOPS, 80.31 MiB/s [2024-12-13T22:46:55.352Z] 3238.00 IOPS, 202.38 MiB/s 00:06:16.212 Latency(us) 00:06:16.212 [2024-12-13T22:46:55.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:16.212 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0x0 length 0xbd0b 00:06:16.212 Nvme0n1 : 5.54 138.68 8.67 0.00 0.00 883898.75 25710.28 1000180.18 00:06:16.212 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:16.212 Nvme0n1 : 5.85 90.90 5.68 0.00 0.00 1330534.22 11594.83 1374441.16 00:06:16.212 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0x0 length 0xa000 00:06:16.212 Nvme1n1 : 5.75 137.59 8.60 0.00 0.00 858619.62 100018.02 858219.13 00:06:16.212 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0xa000 length 0xa000 00:06:16.212 Nvme1n1 : 5.94 96.95 6.06 0.00 0.00 1212281.57 85095.98 1187310.67 00:06:16.212 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0x0 length 0x8000 00:06:16.212 Nvme2n1 : 5.84 146.29 9.14 0.00 0.00 798104.87 30650.68 1032444.06 00:06:16.212 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0x8000 length 0x8000 00:06:16.212 Nvme2n1 : 6.00 102.97 6.44 0.00 0.00 1095465.32 29440.79 1193763.45 00:06:16.212 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0x0 length 0x8000 00:06:16.212 Nvme2n2 : 5.89 148.99 9.31 0.00 0.00 757294.03 32667.18 877577.45 00:06:16.212 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0x8000 length 0x8000 00:06:16.212 Nvme2n2 : 6.02 106.98 6.69 0.00 0.00 1007812.64 29440.79 1200216.22 00:06:16.212 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0x0 length 0x8000 00:06:16.212 Nvme2n3 : 5.89 152.15 9.51 0.00 0.00 725035.89 46379.32 896935.78 00:06:16.212 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0x8000 length 0x8000 00:06:16.212 Nvme2n3 : 6.05 113.64 7.10 0.00 0.00 929901.10 20366.57 2310093.59 00:06:16.212 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA range: start 0x0 length 0x2000 00:06:16.212 Nvme3n1 : 5.96 168.18 10.51 0.00 0.00 641888.31 787.69 967916.31 00:06:16.212 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:16.212 Verification LBA 
range: start 0x2000 length 0x2000 00:06:16.212 Nvme3n1 : 6.14 157.60 9.85 0.00 0.00 646411.55 630.15 2348810.24 00:06:16.212 [2024-12-13T22:46:55.352Z] =================================================================================================================== 00:06:16.212 [2024-12-13T22:46:55.352Z] Total : 1560.92 97.56 0.00 0.00 867612.40 630.15 2348810.24 00:06:17.585 00:06:17.585 real 0m8.612s 00:06:17.585 user 0m15.950s 00:06:17.585 sys 0m0.320s 00:06:17.585 22:46:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.585 ************************************ 00:06:17.585 END TEST bdev_verify_big_io 00:06:17.585 ************************************ 00:06:17.585 22:46:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:17.585 22:46:56 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:17.585 22:46:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:17.585 22:46:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.585 22:46:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.585 ************************************ 00:06:17.585 START TEST bdev_write_zeroes 00:06:17.585 ************************************ 00:06:17.585 22:46:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:17.585 [2024-12-13 22:46:56.537729] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:17.585 [2024-12-13 22:46:56.537841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62358 ] 00:06:17.585 [2024-12-13 22:46:56.698193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.843 [2024-12-13 22:46:56.796527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.409 Running I/O for 1 seconds... 
00:06:19.340 76032.00 IOPS, 297.00 MiB/s 00:06:19.340 Latency(us) 00:06:19.340 [2024-12-13T22:46:58.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:19.340 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:19.341 Nvme0n1 : 1.02 12588.28 49.17 0.00 0.00 10149.41 8570.09 21072.34 00:06:19.341 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:19.341 Nvme1n1 : 1.02 12573.70 49.12 0.00 0.00 10146.61 8670.92 20568.22 00:06:19.341 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:19.341 Nvme2n1 : 1.02 12559.57 49.06 0.00 0.00 10127.29 8469.27 19459.15 00:06:19.341 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:19.341 Nvme2n2 : 1.03 12545.46 49.01 0.00 0.00 10109.40 6856.07 18955.03 00:06:19.341 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:19.341 Nvme2n3 : 1.03 12531.31 48.95 0.00 0.00 10106.34 6452.78 18955.03 00:06:19.341 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:19.341 Nvme3n1 : 1.03 12517.21 48.90 0.00 0.00 10096.15 5394.12 20669.05 00:06:19.341 [2024-12-13T22:46:58.481Z] =================================================================================================================== 00:06:19.341 [2024-12-13T22:46:58.481Z] Total : 75315.52 294.20 0.00 0.00 10122.53 5394.12 21072.34 00:06:20.301 00:06:20.301 real 0m2.675s 00:06:20.301 user 0m2.382s 00:06:20.301 sys 0m0.178s 00:06:20.301 22:46:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.301 ************************************ 00:06:20.301 END TEST bdev_write_zeroes 00:06:20.301 ************************************ 00:06:20.301 22:46:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:20.301 22:46:59 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:20.301 22:46:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:20.301 22:46:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.301 22:46:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.301 ************************************ 00:06:20.301 START TEST bdev_json_nonenclosed 00:06:20.301 ************************************ 00:06:20.301 22:46:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:20.301 [2024-12-13 22:46:59.256833] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:20.301 [2024-12-13 22:46:59.256945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62405 ] 00:06:20.301 [2024-12-13 22:46:59.417622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.560 [2024-12-13 22:46:59.516979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.560 [2024-12-13 22:46:59.517060] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:20.561 [2024-12-13 22:46:59.517077] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:20.561 [2024-12-13 22:46:59.517086] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.821 00:06:20.821 real 0m0.510s 00:06:20.821 user 0m0.312s 00:06:20.821 sys 0m0.094s 00:06:20.821 ************************************ 00:06:20.821 END TEST bdev_json_nonenclosed 00:06:20.821 ************************************ 00:06:20.821 22:46:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.821 22:46:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:20.821 22:46:59 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:20.821 22:46:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:20.821 22:46:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.821 22:46:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.821 ************************************ 00:06:20.821 START TEST bdev_json_nonarray 00:06:20.821 ************************************ 00:06:20.821 22:46:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:20.821 [2024-12-13 22:46:59.811789] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:20.821 [2024-12-13 22:46:59.811877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62431 ] 00:06:21.082 [2024-12-13 22:46:59.965774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.082 [2024-12-13 22:47:00.081175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.082 [2024-12-13 22:47:00.081282] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
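Both JSON negative checks here feed bdevperf a deliberately broken configuration: nonenclosed.json is not wrapped in a top-level {} object, and nonarray.json carries a "subsystems" member that is not an array, so the app rejects them with the errors shown. For contrast, a minimal well-formed config has the shape below (a sketch only; the controller entry mirrors the gen_nvme.sh output used later in this run):

# write an illustrative, well-formed app config: one JSON object whose "subsystems" member is an array
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF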
00:06:21.082 [2024-12-13 22:47:00.081300] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:21.082 [2024-12-13 22:47:00.081315] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.344 00:06:21.344 real 0m0.510s 00:06:21.344 user 0m0.320s 00:06:21.344 sys 0m0.085s 00:06:21.344 22:47:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.344 22:47:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:21.344 ************************************ 00:06:21.344 END TEST bdev_json_nonarray 00:06:21.344 ************************************ 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:21.344 22:47:00 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:21.344 00:06:21.344 real 0m35.765s 00:06:21.344 user 0m55.657s 00:06:21.344 sys 0m5.008s 00:06:21.344 22:47:00 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.344 ************************************ 00:06:21.344 END TEST blockdev_nvme 00:06:21.344 ************************************ 00:06:21.344 22:47:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.344 22:47:00 -- spdk/autotest.sh@209 -- # uname -s 00:06:21.344 22:47:00 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:21.344 22:47:00 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:21.344 22:47:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:21.344 22:47:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.344 22:47:00 -- common/autotest_common.sh@10 -- # set +x 00:06:21.344 ************************************ 00:06:21.344 START TEST blockdev_nvme_gpt 00:06:21.344 ************************************ 00:06:21.344 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:21.344 * Looking for test storage... 
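From this point the suite reruns the block-device tests with GPT partitions in the mix. Condensed from the setup_gpt_conf trace further below (device path and GUIDs exactly as they appear there), the relabeling amounts to three commands; tagging the two halves with SPDK's GPT partition type GUIDs is what lets the gpt bdev module expose them later as Nvme1n1p1 and Nvme1n1p2:

# label the scratch namespace and split it 50/50
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# tag partition 1 with the current SPDK GPT type GUID and partition 2 with the old one (both greped out of module/bdev/gpt/gpt.h in the trace)
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1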
00:06:21.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:21.605 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.605 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.605 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.605 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.605 22:47:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:21.605 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.605 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.605 --rc genhtml_branch_coverage=1 00:06:21.605 --rc genhtml_function_coverage=1 00:06:21.605 --rc genhtml_legend=1 00:06:21.605 --rc geninfo_all_blocks=1 00:06:21.605 --rc geninfo_unexecuted_blocks=1 00:06:21.605 00:06:21.605 ' 00:06:21.605 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.605 --rc 
genhtml_branch_coverage=1 00:06:21.605 --rc genhtml_function_coverage=1 00:06:21.605 --rc genhtml_legend=1 00:06:21.605 --rc geninfo_all_blocks=1 00:06:21.605 --rc geninfo_unexecuted_blocks=1 00:06:21.605 00:06:21.605 ' 00:06:21.605 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.605 --rc genhtml_branch_coverage=1 00:06:21.605 --rc genhtml_function_coverage=1 00:06:21.605 --rc genhtml_legend=1 00:06:21.605 --rc geninfo_all_blocks=1 00:06:21.605 --rc geninfo_unexecuted_blocks=1 00:06:21.605 00:06:21.605 ' 00:06:21.605 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.605 --rc genhtml_branch_coverage=1 00:06:21.605 --rc genhtml_function_coverage=1 00:06:21.605 --rc genhtml_legend=1 00:06:21.605 --rc geninfo_all_blocks=1 00:06:21.605 --rc geninfo_unexecuted_blocks=1 00:06:21.605 00:06:21.605 ' 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:21.605 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:21.606 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:21.606 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:21.606 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:21.606 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62515 00:06:21.606 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:21.606 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62515 
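waitforlisten blocks until the freshly launched spdk_tgt (pid 62515 here) answers on /var/tmp/spdk.sock, so that the rpc_cmd calls that follow have something to talk to. Roughly the idea, as a sketch only and not the helper's actual code (rpc_get_methods is a standard SPDK RPC):

# poll the target's default UNIX-domain RPC socket; give up if the process died
while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done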
00:06:21.606 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62515 ']' 00:06:21.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.606 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.606 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.606 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.606 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.606 22:47:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:21.606 22:47:00 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:21.606 [2024-12-13 22:47:00.661813] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:21.606 [2024-12-13 22:47:00.661962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62515 ] 00:06:21.866 [2024-12-13 22:47:00.826331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.866 [2024-12-13 22:47:00.964283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.806 22:47:01 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.806 22:47:01 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:22.806 22:47:01 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:22.806 22:47:01 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:22.807 22:47:01 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:23.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:23.065 Waiting for block devices as requested 00:06:23.065 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:23.324 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:23.324 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:23.324 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:28.639 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:28.639 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:28.639 22:47:07 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:28.639 22:47:07 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:28.639 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:28.639 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:28.639 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:28.639 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:28.639 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:28.639 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:28.639 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:28.640 BYT; 00:06:28.640 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:28.640 BYT; 00:06:28.640 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:28.640 22:47:07 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:28.640 22:47:07 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:29.581 The operation has completed successfully. 00:06:29.581 22:47:08 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:30.966 The operation has completed successfully. 00:06:30.966 22:47:09 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:31.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:31.559 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:31.559 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:31.559 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:31.820 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:31.820 22:47:10 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:31.820 22:47:10 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.820 22:47:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:31.820 [] 00:06:31.820 22:47:10 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.820 22:47:10 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:31.820 22:47:10 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:31.820 22:47:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:31.820 22:47:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:31.820 22:47:10 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:31.820 22:47:10 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.820 22:47:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.080 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:32.080 22:47:11 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.080 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:32.080 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.080 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.080 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.080 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:32.080 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:32.080 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.080 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:32.339 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.339 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:32.339 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:32.340 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3d3e0fd5-4347-4b57-8f57-4c90f3f3e68c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3d3e0fd5-4347-4b57-8f57-4c90f3f3e68c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "aa80f565-4aad-419c-8982-46d738c2e20a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa80f565-4aad-419c-8982-46d738c2e20a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "75b281cd-1add-4b7d-8b03-a084317bbf22"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "75b281cd-1add-4b7d-8b03-a084317bbf22",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "91a8fc39-fb32-465c-bcbc-5ec25c7d5326"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91a8fc39-fb32-465c-bcbc-5ec25c7d5326",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "10ca651f-e941-4be2-93f5-e1f238bf936f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "10ca651f-e941-4be2-93f5-e1f238bf936f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:32.340 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:32.340 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:32.340 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:32.340 22:47:11 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62515 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62515 ']' 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62515 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62515 00:06:32.340 killing process with pid 62515 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62515' 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62515 00:06:32.340 22:47:11 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62515 00:06:33.715 22:47:12 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:33.715 22:47:12 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.715 22:47:12 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:33.715 22:47:12 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.715 22:47:12 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:33.715 ************************************ 00:06:33.715 START TEST bdev_hello_world 00:06:33.715 ************************************ 00:06:33.716 22:47:12 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.716 [2024-12-13 22:47:12.629486] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:33.716 [2024-12-13 22:47:12.630141] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63138 ] 00:06:33.716 [2024-12-13 22:47:12.788424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.974 [2024-12-13 22:47:12.899408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.541 [2024-12-13 22:47:13.420070] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:34.541 [2024-12-13 22:47:13.420134] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:34.541 [2024-12-13 22:47:13.420157] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:34.541 [2024-12-13 22:47:13.422771] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:34.542 [2024-12-13 22:47:13.423256] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:34.542 [2024-12-13 22:47:13.423286] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:34.542 [2024-12-13 22:47:13.423504] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:34.542 00:06:34.542 [2024-12-13 22:47:13.423528] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:35.107 00:06:35.107 real 0m1.637s 00:06:35.107 user 0m1.307s 00:06:35.107 sys 0m0.221s 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:35.107 ************************************ 00:06:35.107 END TEST bdev_hello_world 00:06:35.107 ************************************ 00:06:35.107 22:47:14 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:35.107 22:47:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:35.107 22:47:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.107 22:47:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.107 ************************************ 00:06:35.107 START TEST bdev_bounds 00:06:35.107 ************************************ 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:35.107 Process bdevio pid: 63174 00:06:35.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63174 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63174' 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63174 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63174 ']' 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.107 22:47:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:35.365 [2024-12-13 22:47:14.298276] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:35.365 [2024-12-13 22:47:14.298386] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63174 ] 00:06:35.365 [2024-12-13 22:47:14.459420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.625 [2024-12-13 22:47:14.577716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.625 [2024-12-13 22:47:14.579441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.625 [2024-12-13 22:47:14.579455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.194 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.194 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:36.194 22:47:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:36.194 I/O targets: 00:06:36.194 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:36.194 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:36.194 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:36.194 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:36.194 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:36.194 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:36.194 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:36.194 00:06:36.194 00:06:36.194 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.194 http://cunit.sourceforge.net/ 00:06:36.194 00:06:36.194 00:06:36.194 Suite: bdevio tests on: Nvme3n1 00:06:36.194 Test: blockdev write read block ...passed 00:06:36.194 Test: blockdev write zeroes read block ...passed 00:06:36.194 Test: blockdev write zeroes read no split ...passed 00:06:36.194 Test: blockdev write zeroes read split ...passed 00:06:36.194 Test: blockdev write zeroes 
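bdevio is launched with -w, so after it registers the bdevs from bdev.json it sits and waits for an RPC trigger; tests.py then issues the perform_tests RPC over the same socket, which is what starts the CUnit suites printed below. In outline (commands as they appear in this trace, the second one issued once the first is listening):

# start the I/O application; -w makes it wait for the trigger instead of running the tests immediately
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# fire the trigger that runs every registered suite
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests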
read split partial ...passed 00:06:36.194 Test: blockdev reset ...[2024-12-13 22:47:15.313451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:36.194 [2024-12-13 22:47:15.318209] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:06:36.194 Test: blockdev write read 8 blocks ...uccessful. 00:06:36.194 passed 00:06:36.194 Test: blockdev write read size > 128k ...passed 00:06:36.194 Test: blockdev write read invalid size ...passed 00:06:36.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.194 Test: blockdev write read max offset ...passed 00:06:36.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.194 Test: blockdev writev readv 8 blocks ...passed 00:06:36.194 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.194 Test: blockdev writev readv block ...passed 00:06:36.455 Test: blockdev writev readv size > 128k ...passed 00:06:36.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.455 Test: blockdev comparev and writev ...[2024-12-13 22:47:15.336719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9004000 len:0x1000 00:06:36.455 [2024-12-13 22:47:15.336955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.455 passed 00:06:36.455 Test: blockdev nvme passthru rw ...passed 00:06:36.455 Test: blockdev nvme passthru vendor specific ...[2024-12-13 22:47:15.339176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:36.455 passed 00:06:36.455 Test: blockdev nvme admin passthru ...[2024-12-13 22:47:15.339281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:36.455 passed 00:06:36.455 Test: blockdev copy ...passed 00:06:36.455 Suite: bdevio tests on: Nvme2n3 00:06:36.455 Test: blockdev write read block ...passed 00:06:36.455 Test: blockdev write zeroes read block ...passed 00:06:36.455 Test: blockdev write zeroes read no split ...passed 00:06:36.455 Test: blockdev write zeroes read split ...passed 00:06:36.455 Test: blockdev write zeroes read split partial ...passed 00:06:36.455 Test: blockdev reset ...[2024-12-13 22:47:15.395209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:36.455 [2024-12-13 22:47:15.399874] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:36.455 Test: blockdev write read 8 blocks ...uccessful. 
00:06:36.455 passed 00:06:36.455 Test: blockdev write read size > 128k ...passed 00:06:36.455 Test: blockdev write read invalid size ...passed 00:06:36.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.455 Test: blockdev write read max offset ...passed 00:06:36.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.455 Test: blockdev writev readv 8 blocks ...passed 00:06:36.455 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.455 Test: blockdev writev readv block ...passed 00:06:36.455 Test: blockdev writev readv size > 128k ...passed 00:06:36.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.455 Test: blockdev comparev and writev ...[2024-12-13 22:47:15.419249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9002000 len:0x1000 00:06:36.455 [2024-12-13 22:47:15.419463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.455 passed 00:06:36.455 Test: blockdev nvme passthru rw ...passed 00:06:36.455 Test: blockdev nvme passthru vendor specific ...[2024-12-13 22:47:15.422160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:36.455 passed 00:06:36.455 Test: blockdev nvme admin passthru ...[2024-12-13 22:47:15.422312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:36.455 passed 00:06:36.455 Test: blockdev copy ...passed 00:06:36.455 Suite: bdevio tests on: Nvme2n2 00:06:36.455 Test: blockdev write read block ...passed 00:06:36.455 Test: blockdev write zeroes read block ...passed 00:06:36.455 Test: blockdev write zeroes read no split ...passed 00:06:36.455 Test: blockdev write zeroes read split ...passed 00:06:36.455 Test: blockdev write zeroes read split partial ...passed 00:06:36.455 Test: blockdev reset ...[2024-12-13 22:47:15.478067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:36.455 [2024-12-13 22:47:15.482519] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:36.455 Test: blockdev write read 8 blocks ...uccessful. 
00:06:36.455 passed 00:06:36.455 Test: blockdev write read size > 128k ...passed 00:06:36.455 Test: blockdev write read invalid size ...passed 00:06:36.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.455 Test: blockdev write read max offset ...passed 00:06:36.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.455 Test: blockdev writev readv 8 blocks ...passed 00:06:36.455 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.455 Test: blockdev writev readv block ...passed 00:06:36.455 Test: blockdev writev readv size > 128k ...passed 00:06:36.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.455 Test: blockdev comparev and writev ...[2024-12-13 22:47:15.500480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfc38000 len:0x1000 00:06:36.455 passed 00:06:36.455 Test: blockdev nvme passthru rw ...[2024-12-13 22:47:15.500691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.455 passed 00:06:36.455 Test: blockdev nvme passthru vendor specific ...[2024-12-13 22:47:15.502672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:36.455 passed 00:06:36.455 Test: blockdev nvme admin passthru ...[2024-12-13 22:47:15.503061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:36.455 passed 00:06:36.455 Test: blockdev copy ...passed 00:06:36.455 Suite: bdevio tests on: Nvme2n1 00:06:36.455 Test: blockdev write read block ...passed 00:06:36.455 Test: blockdev write zeroes read block ...passed 00:06:36.455 Test: blockdev write zeroes read no split ...passed 00:06:36.455 Test: blockdev write zeroes read split ...passed 00:06:36.455 Test: blockdev write zeroes read split partial ...passed 00:06:36.455 Test: blockdev reset ...[2024-12-13 22:47:15.568924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:36.455 [2024-12-13 22:47:15.572445] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:36.455 Test: blockdev write read 8 blocks ...uccessful. 
00:06:36.455 passed 00:06:36.455 Test: blockdev write read size > 128k ...passed 00:06:36.455 Test: blockdev write read invalid size ...passed 00:06:36.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.455 Test: blockdev write read max offset ...passed 00:06:36.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.455 Test: blockdev writev readv 8 blocks ...passed 00:06:36.455 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.455 Test: blockdev writev readv block ...passed 00:06:36.455 Test: blockdev writev readv size > 128k ...passed 00:06:36.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.455 Test: blockdev comparev and writev ...[2024-12-13 22:47:15.587475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfc34000 len:0x1000 00:06:36.455 [2024-12-13 22:47:15.587526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.455 passed 00:06:36.455 Test: blockdev nvme passthru rw ...passed 00:06:36.455 Test: blockdev nvme passthru vendor specific ...passed 00:06:36.455 Test: blockdev nvme admin passthru ...[2024-12-13 22:47:15.588259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:36.455 [2024-12-13 22:47:15.588284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:36.455 passed 00:06:36.455 Test: blockdev copy ...passed 00:06:36.455 Suite: bdevio tests on: Nvme1n1p2 00:06:36.455 Test: blockdev write read block ...passed 00:06:36.715 Test: blockdev write zeroes read block ...passed 00:06:36.715 Test: blockdev write zeroes read no split ...passed 00:06:36.715 Test: blockdev write zeroes read split ...passed 00:06:36.715 Test: blockdev write zeroes read split partial ...passed 00:06:36.715 Test: blockdev reset ...[2024-12-13 22:47:15.643142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:36.715 [2024-12-13 22:47:15.645742] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:36.715 passed 00:06:36.715 Test: blockdev write read 8 blocks ...passed 00:06:36.716 Test: blockdev write read size > 128k ...passed 00:06:36.716 Test: blockdev write read invalid size ...passed 00:06:36.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.716 Test: blockdev write read max offset ...passed 00:06:36.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.716 Test: blockdev writev readv 8 blocks ...passed 00:06:36.716 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.716 Test: blockdev writev readv block ...passed 00:06:36.716 Test: blockdev writev readv size > 128k ...passed 00:06:36.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.716 Test: blockdev comparev and writev ...[2024-12-13 22:47:15.663448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dfc30000 len:0x1000 00:06:36.716 [2024-12-13 22:47:15.663494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.716 passed 00:06:36.716 Test: blockdev nvme passthru rw ...passed 00:06:36.716 Test: blockdev nvme passthru vendor specific ...passed 00:06:36.716 Test: blockdev nvme admin passthru ...passed 00:06:36.716 Test: blockdev copy ...passed 00:06:36.716 Suite: bdevio tests on: Nvme1n1p1 00:06:36.716 Test: blockdev write read block ...passed 00:06:36.716 Test: blockdev write zeroes read block ...passed 00:06:36.716 Test: blockdev write zeroes read no split ...passed 00:06:36.716 Test: blockdev write zeroes read split ...passed 00:06:36.716 Test: blockdev write zeroes read split partial ...passed 00:06:36.716 Test: blockdev reset ...[2024-12-13 22:47:15.713142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:36.716 [2024-12-13 22:47:15.716513] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:06:36.716 Test: blockdev write read 8 blocks ...
00:06:36.716 passed 00:06:36.716 Test: blockdev write read size > 128k ...passed 00:06:36.716 Test: blockdev write read invalid size ...passed 00:06:36.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.716 Test: blockdev write read max offset ...passed 00:06:36.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.716 Test: blockdev writev readv 8 blocks ...passed 00:06:36.716 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.716 Test: blockdev writev readv block ...passed 00:06:36.716 Test: blockdev writev readv size > 128k ...passed 00:06:36.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.716 Test: blockdev comparev and writev ...[2024-12-13 22:47:15.733939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b9a0e000 len:0x1000 00:06:36.716 [2024-12-13 22:47:15.734071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.716 passed 00:06:36.716 Test: blockdev nvme passthru rw ...passed 00:06:36.716 Test: blockdev nvme passthru vendor specific ...passed 00:06:36.716 Test: blockdev nvme admin passthru ...passed 00:06:36.716 Test: blockdev copy ...passed 00:06:36.716 Suite: bdevio tests on: Nvme0n1 00:06:36.716 Test: blockdev write read block ...passed 00:06:36.716 Test: blockdev write zeroes read block ...passed 00:06:36.716 Test: blockdev write zeroes read no split ...passed 00:06:36.716 Test: blockdev write zeroes read split ...passed 00:06:36.716 Test: blockdev write zeroes read split partial ...passed 00:06:36.716 Test: blockdev reset ...[2024-12-13 22:47:15.785818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:36.716 [2024-12-13 22:47:15.788343] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:36.716 passed 00:06:36.716 Test: blockdev write read 8 blocks ...passed 00:06:36.716 Test: blockdev write read size > 128k ...passed 00:06:36.716 Test: blockdev write read invalid size ...passed 00:06:36.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.716 Test: blockdev write read max offset ...passed 00:06:36.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.716 Test: blockdev writev readv 8 blocks ...passed 00:06:36.716 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.716 Test: blockdev writev readv block ...passed 00:06:36.716 Test: blockdev writev readv size > 128k ...passed 00:06:36.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.716 Test: blockdev comparev and writev ...passed 00:06:36.716 Test: blockdev nvme passthru rw ...[2024-12-13 22:47:15.802786] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:36.716 separate metadata which is not supported yet. 
00:06:36.716 passed 00:06:36.716 Test: blockdev nvme passthru vendor specific ...passed 00:06:36.716 Test: blockdev nvme admin passthru ...[2024-12-13 22:47:15.804097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:36.716 [2024-12-13 22:47:15.804138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:36.716 passed 00:06:36.716 Test: blockdev copy ...passed 00:06:36.716 00:06:36.716 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.716 suites 7 7 n/a 0 0 00:06:36.716 tests 161 161 161 0 0 00:06:36.716 asserts 1025 1025 1025 0 n/a 00:06:36.716 00:06:36.716 Elapsed time = 1.379 seconds 00:06:36.716 0 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63174 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63174 ']' 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63174 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63174 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.716 killing process with pid 63174 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63174' 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63174 00:06:36.716 22:47:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63174 00:06:37.655 22:47:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:37.655 00:06:37.655 real 0m2.204s 00:06:37.655 user 0m5.539s 00:06:37.655 sys 0m0.282s 00:06:37.655 22:47:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:37.656 ************************************ 00:06:37.656 END TEST bdev_bounds 00:06:37.656 ************************************ 00:06:37.656 22:47:16 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:37.656 22:47:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:37.656 22:47:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.656 22:47:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:37.656 ************************************ 00:06:37.656 START TEST bdev_nbd 00:06:37.656 ************************************ 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63228 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63228 /var/tmp/spdk-nbd.sock 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63228 ']' 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:37.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.656 22:47:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:37.656 [2024-12-13 22:47:16.567874] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
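The trace above launches bdev_svc with a dedicated RPC socket before any NBD work starts. A minimal stand-alone sketch of that startup step follows; SPDK_DIR, the background/poll handling, and the rpc_get_methods probe are illustrative assumptions, while the socket path and bdev_svc flags mirror what this run uses:
SPDK_DIR=/home/vagrant/spdk_repo/spdk        # assumption: SPDK checkout location used by this job
sock=/var/tmp/spdk-nbd.sock
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 --json "$SPDK_DIR/test/bdev/bdev.json" &
svc_pid=$!                                   # remembered so the service can be killed at the end
# Poll the RPC socket until the service answers; only then can nbd_* RPCs be issued.
until "$SPDK_DIR/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done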
00:06:37.656 [2024-12-13 22:47:16.568117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.656 [2024-12-13 22:47:16.721316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.917 [2024-12-13 22:47:16.818980] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:38.489 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.750 1+0 records in 00:06:38.750 1+0 records out 00:06:38.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126158 s, 3.2 MB/s 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:38.750 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.011 1+0 records in 00:06:39.011 1+0 records out 00:06:39.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000878627 s, 4.7 MB/s 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:39.011 22:47:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:39.011 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:39.011 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:39.011 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:39.011 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:39.011 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:39.011 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.011 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.011 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.272 1+0 records in 00:06:39.272 1+0 records out 00:06:39.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745276 s, 5.5 MB/s 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.272 22:47:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.272 1+0 records in 00:06:39.272 1+0 records out 00:06:39.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000972707 s, 4.2 MB/s 00:06:39.273 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.273 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:39.273 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.273 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.273 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:39.273 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:39.273 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:39.273 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.534 1+0 records in 00:06:39.534 1+0 records out 00:06:39.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107068 s, 3.8 MB/s 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:39.534 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
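Each bdev is exported the same way in this first pass: nbd_start_disk with no explicit device (the RPC reply names the /dev/nbdX that was assigned), a wait for that name to appear in /proc/partitions, then a single 4 KiB direct read. A condensed sketch of the loop, reusing SPDK_DIR and sock from the earlier sketch (the output file path is illustrative):
for bdev in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
    nbd=$("$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_start_disk "$bdev")   # kernel-assigned /dev/nbdX
    until grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct             # one direct 4 KiB read to prove the device works
done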
00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.796 1+0 records in 00:06:39.796 1+0 records out 00:06:39.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766254 s, 5.3 MB/s 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:39.796 22:47:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:40.056 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:40.056 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:40.056 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:40.056 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:40.056 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.056 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.057 1+0 records in 00:06:40.057 1+0 records out 00:06:40.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000939441 s, 4.4 MB/s 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:40.057 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd0", 00:06:40.318 "bdev_name": "Nvme0n1" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd1", 00:06:40.318 "bdev_name": "Nvme1n1p1" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd2", 00:06:40.318 "bdev_name": "Nvme1n1p2" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd3", 00:06:40.318 "bdev_name": "Nvme2n1" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd4", 00:06:40.318 "bdev_name": "Nvme2n2" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd5", 00:06:40.318 "bdev_name": "Nvme2n3" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd6", 00:06:40.318 "bdev_name": "Nvme3n1" 00:06:40.318 } 00:06:40.318 ]' 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd0", 00:06:40.318 "bdev_name": "Nvme0n1" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd1", 00:06:40.318 "bdev_name": "Nvme1n1p1" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd2", 00:06:40.318 "bdev_name": "Nvme1n1p2" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd3", 00:06:40.318 "bdev_name": "Nvme2n1" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd4", 00:06:40.318 "bdev_name": "Nvme2n2" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd5", 00:06:40.318 "bdev_name": "Nvme2n3" 00:06:40.318 }, 00:06:40.318 { 00:06:40.318 "nbd_device": "/dev/nbd6", 00:06:40.318 "bdev_name": "Nvme3n1" 00:06:40.318 } 00:06:40.318 ]' 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.318 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.580 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.839 22:47:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.098 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.359 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.620 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
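Teardown mirrors the attach step: nbd_stop_disk per device, a wait for the node to drop out of /proc/partitions, and finally nbd_get_disks to confirm nothing is left exported. A sketch under the same assumptions as above:
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
    "$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_stop_disk "$nbd"
    # waitfornbd_exit: done once the device no longer shows up in /proc/partitions
    while grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done
done
"$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_get_disks    # expect an empty list once everything is stopped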
00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.882 22:47:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.882 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.882 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.882 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.142 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.142 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.142 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:42.143 22:47:21 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:42.143 /dev/nbd0 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.143 1+0 records in 00:06:42.143 1+0 records out 00:06:42.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102298 s, 4.0 MB/s 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.143 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:42.405 /dev/nbd1 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.405 22:47:21 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.405 1+0 records in 00:06:42.405 1+0 records out 00:06:42.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701941 s, 5.8 MB/s 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.405 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:42.666 /dev/nbd10 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.666 1+0 records in 00:06:42.666 1+0 records out 00:06:42.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105635 s, 3.9 MB/s 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:42.666 22:47:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:42.934 /dev/nbd11 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.934 1+0 records in 00:06:42.934 1+0 records out 00:06:42.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129494 s, 3.2 MB/s 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:42.934 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:43.205 /dev/nbd12 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
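In this second pass the mapping is explicit: each bdev is pinned to a chosen device node (Nvme1n1p2 onto /dev/nbd10, Nvme2n1 onto /dev/nbd11, and so on), followed by the same partition-table wait and dd probe. A sketch using the device list this run passes in; SPDK_DIR, sock, and the output path are the same assumptions as before:
bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
for i in "${!bdevs[@]}"; do
    "$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"   # pin bdev to a specific node
    until grep -q -w "$(basename "${nbds[$i]}")" /proc/partitions; do sleep 0.1; done
    dd if="${nbds[$i]}" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
done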
00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.205 1+0 records in 00:06:43.205 1+0 records out 00:06:43.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000976321 s, 4.2 MB/s 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.205 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:43.206 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:43.465 /dev/nbd13 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.465 1+0 records in 00:06:43.465 1+0 records out 00:06:43.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458952 s, 8.9 MB/s 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:43.465 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:43.723 /dev/nbd14 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.723 1+0 records in 00:06:43.723 1+0 records out 00:06:43.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665692 s, 6.2 MB/s 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.723 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd0", 00:06:43.984 "bdev_name": "Nvme0n1" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd1", 00:06:43.984 "bdev_name": "Nvme1n1p1" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd10", 00:06:43.984 "bdev_name": "Nvme1n1p2" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd11", 00:06:43.984 "bdev_name": "Nvme2n1" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd12", 00:06:43.984 "bdev_name": "Nvme2n2" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd13", 00:06:43.984 "bdev_name": "Nvme2n3" 
00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd14", 00:06:43.984 "bdev_name": "Nvme3n1" 00:06:43.984 } 00:06:43.984 ]' 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd0", 00:06:43.984 "bdev_name": "Nvme0n1" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd1", 00:06:43.984 "bdev_name": "Nvme1n1p1" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd10", 00:06:43.984 "bdev_name": "Nvme1n1p2" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd11", 00:06:43.984 "bdev_name": "Nvme2n1" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd12", 00:06:43.984 "bdev_name": "Nvme2n2" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd13", 00:06:43.984 "bdev_name": "Nvme2n3" 00:06:43.984 }, 00:06:43.984 { 00:06:43.984 "nbd_device": "/dev/nbd14", 00:06:43.984 "bdev_name": "Nvme3n1" 00:06:43.984 } 00:06:43.984 ]' 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.984 /dev/nbd1 00:06:43.984 /dev/nbd10 00:06:43.984 /dev/nbd11 00:06:43.984 /dev/nbd12 00:06:43.984 /dev/nbd13 00:06:43.984 /dev/nbd14' 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.984 /dev/nbd1 00:06:43.984 /dev/nbd10 00:06:43.984 /dev/nbd11 00:06:43.984 /dev/nbd12 00:06:43.984 /dev/nbd13 00:06:43.984 /dev/nbd14' 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:43.984 256+0 records in 00:06:43.984 256+0 records out 00:06:43.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00803192 s, 131 MB/s 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.984 22:47:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.243 256+0 records in 00:06:44.243 256+0 records out 00:06:44.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.207602 s, 5.1 MB/s 00:06:44.243 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.243 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.243 256+0 records in 00:06:44.243 256+0 records out 00:06:44.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0834754 s, 12.6 MB/s 00:06:44.243 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.243 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:44.243 256+0 records in 00:06:44.243 256+0 records out 00:06:44.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0815945 s, 12.9 MB/s 00:06:44.243 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.243 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:44.501 256+0 records in 00:06:44.501 256+0 records out 00:06:44.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0800323 s, 13.1 MB/s 00:06:44.501 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.501 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:44.501 256+0 records in 00:06:44.501 256+0 records out 00:06:44.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0768685 s, 13.6 MB/s 00:06:44.501 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.501 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:44.501 256+0 records in 00:06:44.501 256+0 records out 00:06:44.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0775308 s, 13.5 MB/s 00:06:44.501 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.501 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:44.759 256+0 records in 00:06:44.759 256+0 records out 00:06:44.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0809872 s, 12.9 MB/s 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.759 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.019 22:47:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.279 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:45.280 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.280 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.280 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.280 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.538 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.796 22:47:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.055 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.312 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:46.570 malloc_lvol_verify 00:06:46.570 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:46.830 cc2b916a-f058-4ba8-a110-cd00a6c0363a 00:06:46.830 22:47:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:47.088 aaa1fd66-39a1-4673-b83f-f32cb13b1468 00:06:47.088 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:47.347 /dev/nbd0 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:47.347 mke2fs 1.47.0 (5-Feb-2023) 00:06:47.347 Discarding device blocks: 0/4096 done 00:06:47.347 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:47.347 00:06:47.347 Allocating group tables: 0/1 done 00:06:47.347 Writing inode tables: 0/1 done 00:06:47.347 Creating journal (1024 blocks): done 00:06:47.347 Writing superblocks and filesystem accounting information: 0/1 done 00:06:47.347 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:47.347 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63228 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63228 ']' 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63228 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63228 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63228' 00:06:47.605 killing process with pid 63228 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63228 00:06:47.605 22:47:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63228 00:06:48.540 22:47:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:48.540 00:06:48.540 real 0m10.817s 00:06:48.540 user 0m15.374s 00:06:48.540 sys 0m3.497s 00:06:48.540 22:47:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.540 22:47:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:48.540 ************************************ 00:06:48.540 END TEST bdev_nbd 00:06:48.540 ************************************ 00:06:48.540 22:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:48.540 22:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:48.540 22:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:48.540 skipping fio tests on NVMe due to multi-ns failures. 00:06:48.540 22:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:48.540 22:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:48.540 22:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:48.540 22:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:48.540 22:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.540 22:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.540 ************************************ 00:06:48.540 START TEST bdev_verify 00:06:48.540 ************************************ 00:06:48.540 22:47:27 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:48.540 [2024-12-13 22:47:27.437017] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:48.540 [2024-12-13 22:47:27.437145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63645 ] 00:06:48.540 [2024-12-13 22:47:27.598781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.799 [2024-12-13 22:47:27.717707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.799 [2024-12-13 22:47:27.717785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.365 Running I/O for 5 seconds... 
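The verify pass launched here drives every bdev described in bdev.json through the bdevperf example application. A sketch of an equivalent standalone invocation, taken from the command line visible in the trace (paths are the ones this CI job uses; the flag glosses follow bdevperf's usage text and are descriptive, not exhaustive):

#   -q 128     queue depth per job
#   -o 4096    I/O size in bytes
#   -w verify  write, then read back and compare
#   -t 5       run time in seconds
#   -C         let every core submit I/O to every bdev
#   -m 0x3     core mask: cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3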
00:06:51.686 25152.00 IOPS, 98.25 MiB/s [2024-12-13T22:47:31.768Z] 23808.00 IOPS, 93.00 MiB/s [2024-12-13T22:47:32.710Z] 22613.33 IOPS, 88.33 MiB/s [2024-12-13T22:47:33.650Z] 22592.00 IOPS, 88.25 MiB/s [2024-12-13T22:47:33.650Z] 22528.00 IOPS, 88.00 MiB/s 00:06:54.510 Latency(us) 00:06:54.510 [2024-12-13T22:47:33.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.510 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x0 length 0xbd0bd 00:06:54.510 Nvme0n1 : 5.05 1672.62 6.53 0.00 0.00 76293.84 18249.26 71787.13 00:06:54.510 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:54.510 Nvme0n1 : 5.07 1503.20 5.87 0.00 0.00 84717.32 10586.58 75820.11 00:06:54.510 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x0 length 0x4ff80 00:06:54.510 Nvme1n1p1 : 5.05 1672.13 6.53 0.00 0.00 76203.10 19459.15 64931.05 00:06:54.510 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:54.510 Nvme1n1p1 : 5.08 1510.35 5.90 0.00 0.00 84389.14 12905.55 69367.34 00:06:54.510 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x0 length 0x4ff7f 00:06:54.510 Nvme1n1p2 : 5.05 1671.65 6.53 0.00 0.00 76086.64 19761.62 59688.17 00:06:54.510 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:54.510 Nvme1n1p2 : 5.09 1509.91 5.90 0.00 0.00 84288.48 12703.90 67350.84 00:06:54.510 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x0 length 0x80000 00:06:54.510 Nvme2n1 : 5.06 1671.16 6.53 0.00 0.00 75994.45 19055.85 57268.38 00:06:54.510 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x80000 length 0x80000 00:06:54.510 Nvme2n1 : 5.09 1509.04 5.89 0.00 0.00 84143.45 13712.15 67754.14 00:06:54.510 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x0 length 0x80000 00:06:54.510 Nvme2n2 : 5.07 1679.20 6.56 0.00 0.00 75545.87 4915.20 58074.98 00:06:54.510 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x80000 length 0x80000 00:06:54.510 Nvme2n2 : 5.09 1508.64 5.89 0.00 0.00 83997.01 14014.62 70173.93 00:06:54.510 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x0 length 0x80000 00:06:54.510 Nvme2n3 : 5.07 1678.71 6.56 0.00 0.00 75431.24 5041.23 60091.47 00:06:54.510 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x80000 length 0x80000 00:06:54.510 Nvme2n3 : 5.09 1508.25 5.89 0.00 0.00 83866.79 13107.20 72190.42 00:06:54.510 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x0 length 0x20000 00:06:54.510 Nvme3n1 : 5.08 1687.49 6.59 0.00 0.00 74973.64 8721.33 60494.77 00:06:54.510 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:54.510 Verification LBA range: start 0x20000 length 0x20000 00:06:54.510 Nvme3n1 
: 5.09 1507.86 5.89 0.00 0.00 83754.18 10132.87 74206.92 00:06:54.510 [2024-12-13T22:47:33.650Z] =================================================================================================================== 00:06:54.510 [2024-12-13T22:47:33.650Z] Total : 22290.22 87.07 0.00 0.00 79765.01 4915.20 75820.11 00:06:55.894 00:06:55.894 real 0m7.381s 00:06:55.894 user 0m13.785s 00:06:55.894 sys 0m0.232s 00:06:55.894 22:47:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.894 22:47:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:55.894 ************************************ 00:06:55.894 END TEST bdev_verify 00:06:55.894 ************************************ 00:06:55.894 22:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:55.894 22:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:55.894 22:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.894 22:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.894 ************************************ 00:06:55.894 START TEST bdev_verify_big_io 00:06:55.894 ************************************ 00:06:55.894 22:47:34 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:55.894 [2024-12-13 22:47:34.866223] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:55.894 [2024-12-13 22:47:34.866343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63743 ] 00:06:55.894 [2024-12-13 22:47:35.026970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.153 [2024-12-13 22:47:35.148678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.153 [2024-12-13 22:47:35.148749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.095 Running I/O for 5 seconds... 
00:07:02.977 854.00 IOPS, 53.38 MiB/s [2024-12-13T22:47:42.681Z] 3279.50 IOPS, 204.97 MiB/s [2024-12-13T22:47:42.681Z] 3560.00 IOPS, 222.50 MiB/s 00:07:03.541 Latency(us) 00:07:03.541 [2024-12-13T22:47:42.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.541 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.541 Verification LBA range: start 0x0 length 0xbd0b 00:07:03.542 Nvme0n1 : 5.76 128.80 8.05 0.00 0.00 957135.07 20164.92 1071160.71 00:07:03.542 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:03.542 Nvme0n1 : 5.97 67.32 4.21 0.00 0.00 1778698.08 21374.82 1742249.35 00:07:03.542 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x0 length 0x4ff8 00:07:03.542 Nvme1n1p1 : 5.76 129.65 8.10 0.00 0.00 926942.37 86709.17 929199.66 00:07:03.542 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:03.542 Nvme1n1p1 : 6.01 74.01 4.63 0.00 0.00 1593439.25 38716.65 1819682.66 00:07:03.542 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x0 length 0x4ff7 00:07:03.542 Nvme1n1p2 : 5.76 133.24 8.33 0.00 0.00 888298.47 84692.68 774333.05 00:07:03.542 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:03.542 Nvme1n1p2 : 6.01 74.49 4.66 0.00 0.00 1510417.55 39523.25 1858399.31 00:07:03.542 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x0 length 0x8000 00:07:03.542 Nvme2n1 : 5.83 131.46 8.22 0.00 0.00 866748.15 85902.57 1193763.45 00:07:03.542 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x8000 length 0x8000 00:07:03.542 Nvme2n1 : 6.07 84.39 5.27 0.00 0.00 1271290.49 26416.05 1884210.41 00:07:03.542 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x0 length 0x8000 00:07:03.542 Nvme2n2 : 5.83 136.88 8.55 0.00 0.00 819681.38 62107.96 987274.63 00:07:03.542 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x8000 length 0x8000 00:07:03.542 Nvme2n2 : 6.16 103.93 6.50 0.00 0.00 999945.92 14720.39 1529307.77 00:07:03.542 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x0 length 0x8000 00:07:03.542 Nvme2n3 : 5.93 146.95 9.18 0.00 0.00 742903.55 28029.24 987274.63 00:07:03.542 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x8000 length 0x8000 00:07:03.542 Nvme2n3 : 6.31 146.14 9.13 0.00 0.00 682096.62 10334.52 1768060.46 00:07:03.542 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x0 length 0x2000 00:07:03.542 Nvme3n1 : 5.98 167.19 10.45 0.00 0.00 638966.45 3806.13 1000180.18 00:07:03.542 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:03.542 Verification LBA range: start 0x2000 length 0x2000 00:07:03.542 Nvme3n1 : 6.52 268.07 16.75 0.00 0.00 359074.37 790.84 1910021.51 00:07:03.542 
[2024-12-13T22:47:42.682Z] =================================================================================================================== 00:07:03.542 [2024-12-13T22:47:42.682Z] Total : 1792.52 112.03 0.00 0.00 863873.98 790.84 1910021.51 00:07:04.923 00:07:04.923 real 0m9.159s 00:07:04.923 user 0m16.771s 00:07:04.923 sys 0m0.284s 00:07:04.923 22:47:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.923 22:47:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:04.923 ************************************ 00:07:04.923 END TEST bdev_verify_big_io 00:07:04.923 ************************************ 00:07:04.924 22:47:44 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:04.924 22:47:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:04.924 22:47:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.924 22:47:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:04.924 ************************************ 00:07:04.924 START TEST bdev_write_zeroes 00:07:04.924 ************************************ 00:07:04.924 22:47:44 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:05.184 [2024-12-13 22:47:44.084957] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:05.184 [2024-12-13 22:47:44.085089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63859 ] 00:07:05.184 [2024-12-13 22:47:44.241446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.444 [2024-12-13 22:47:44.339177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.014 Running I/O for 1 seconds... 
00:07:06.946 46848.00 IOPS, 183.00 MiB/s 00:07:06.946 Latency(us) 00:07:06.946 [2024-12-13T22:47:46.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.946 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.946 Nvme0n1 : 1.02 6682.38 26.10 0.00 0.00 19113.59 8519.68 358129.03 00:07:06.946 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.946 Nvme1n1p1 : 1.03 6798.96 26.56 0.00 0.00 18767.22 8570.09 348449.87 00:07:06.946 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.946 Nvme1n1p2 : 1.03 6790.54 26.53 0.00 0.00 18753.56 7410.61 353289.45 00:07:06.946 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.946 Nvme2n1 : 1.03 6658.41 26.01 0.00 0.00 19098.88 11544.42 356515.84 00:07:06.946 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.946 Nvme2n2 : 1.03 6650.89 25.98 0.00 0.00 19069.60 11544.42 354902.65 00:07:06.946 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.946 Nvme2n3 : 1.03 6643.33 25.95 0.00 0.00 19071.87 11544.42 354902.65 00:07:06.946 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:06.946 Nvme3n1 : 1.03 6635.76 25.92 0.00 0.00 19056.19 10485.76 354902.65 00:07:06.946 [2024-12-13T22:47:46.086Z] =================================================================================================================== 00:07:06.946 [2024-12-13T22:47:46.086Z] Total : 46860.27 183.05 0.00 0.00 18988.91 7410.61 358129.03 00:07:07.881 00:07:07.881 real 0m2.692s 00:07:07.881 user 0m2.389s 00:07:07.881 sys 0m0.192s 00:07:07.881 22:47:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.881 ************************************ 00:07:07.881 END TEST bdev_write_zeroes 00:07:07.881 ************************************ 00:07:07.881 22:47:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:07.881 22:47:46 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:07.881 22:47:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:07.881 22:47:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.881 22:47:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.881 ************************************ 00:07:07.881 START TEST bdev_json_nonenclosed 00:07:07.881 ************************************ 00:07:07.881 22:47:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:07.881 [2024-12-13 22:47:46.817210] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:07.881 [2024-12-13 22:47:46.817350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63912 ] 00:07:07.881 [2024-12-13 22:47:46.978337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.138 [2024-12-13 22:47:47.090301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.138 [2024-12-13 22:47:47.090391] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:08.138 [2024-12-13 22:47:47.090409] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:08.138 [2024-12-13 22:47:47.090419] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.398 00:07:08.398 real 0m0.526s 00:07:08.398 user 0m0.322s 00:07:08.398 sys 0m0.100s 00:07:08.398 22:47:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.398 22:47:47 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:08.398 ************************************ 00:07:08.398 END TEST bdev_json_nonenclosed 00:07:08.398 ************************************ 00:07:08.398 22:47:47 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.398 22:47:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:08.398 22:47:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.398 22:47:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.398 ************************************ 00:07:08.398 START TEST bdev_json_nonarray 00:07:08.398 ************************************ 00:07:08.398 22:47:47 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.398 [2024-12-13 22:47:47.381468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:08.398 [2024-12-13 22:47:47.381602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63932 ] 00:07:08.660 [2024-12-13 22:47:47.542880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.660 [2024-12-13 22:47:47.656670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.660 [2024-12-13 22:47:47.656783] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
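The nonenclosed and nonarray cases are negative tests of SPDK's JSON config loader: json_config_prepare_ctx must reject a configuration whose top level is not wrapped in {} or whose "subsystems" key is not an array, and the app must then stop with a non-zero status. A hypothetical config of the second shape (the actual nonarray.json fixture is not reproduced in this log):

# Illustrative only -- not the real test fixture.
cat > /tmp/nonarray.json <<'EOF'
{
  "subsystems": {
    "bdev": { "config": [] }
  }
}
EOF
# Passing a file like this via --json makes json_config_prepare_ctx fail with
# "Invalid JSON configuration: 'subsystems' should be an array."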
00:07:08.660 [2024-12-13 22:47:47.656803] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:08.660 [2024-12-13 22:47:47.656814] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.921 00:07:08.921 real 0m0.530s 00:07:08.921 user 0m0.329s 00:07:08.921 sys 0m0.097s 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.921 ************************************ 00:07:08.921 END TEST bdev_json_nonarray 00:07:08.921 ************************************ 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:08.921 22:47:47 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:08.921 22:47:47 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:08.921 22:47:47 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:08.921 22:47:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.921 22:47:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.921 22:47:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.921 ************************************ 00:07:08.921 START TEST bdev_gpt_uuid 00:07:08.921 ************************************ 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63963 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63963 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63963 ']' 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:08.921 22:47:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:08.921 [2024-12-13 22:47:47.985305] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:08.921 [2024-12-13 22:47:47.985437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63963 ] 00:07:09.181 [2024-12-13 22:47:48.146652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.181 [2024-12-13 22:47:48.263504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.123 22:47:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.123 22:47:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:10.123 22:47:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:10.123 22:47:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.123 22:47:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:10.123 Some configs were skipped because the RPC state that can call them passed over. 00:07:10.123 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.123 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:10.123 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.123 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:10.123 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.123 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:10.123 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.123 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:10.401 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.401 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:10.401 { 00:07:10.402 "name": "Nvme1n1p1", 00:07:10.402 "aliases": [ 00:07:10.402 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:10.402 ], 00:07:10.402 "product_name": "GPT Disk", 00:07:10.402 "block_size": 4096, 00:07:10.402 "num_blocks": 655104, 00:07:10.402 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:10.402 "assigned_rate_limits": { 00:07:10.402 "rw_ios_per_sec": 0, 00:07:10.402 "rw_mbytes_per_sec": 0, 00:07:10.402 "r_mbytes_per_sec": 0, 00:07:10.402 "w_mbytes_per_sec": 0 00:07:10.402 }, 00:07:10.402 "claimed": false, 00:07:10.402 "zoned": false, 00:07:10.402 "supported_io_types": { 00:07:10.402 "read": true, 00:07:10.402 "write": true, 00:07:10.402 "unmap": true, 00:07:10.402 "flush": true, 00:07:10.402 "reset": true, 00:07:10.402 "nvme_admin": false, 00:07:10.402 "nvme_io": false, 00:07:10.402 "nvme_io_md": false, 00:07:10.402 "write_zeroes": true, 00:07:10.402 "zcopy": false, 00:07:10.402 "get_zone_info": false, 00:07:10.402 "zone_management": false, 00:07:10.402 "zone_append": false, 00:07:10.402 "compare": true, 00:07:10.402 "compare_and_write": false, 00:07:10.402 "abort": true, 00:07:10.402 "seek_hole": false, 00:07:10.402 "seek_data": false, 00:07:10.402 "copy": true, 00:07:10.402 "nvme_iov_md": false 00:07:10.402 }, 00:07:10.402 "driver_specific": { 
00:07:10.402 "gpt": { 00:07:10.402 "base_bdev": "Nvme1n1", 00:07:10.402 "offset_blocks": 256, 00:07:10.402 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:10.402 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:10.402 "partition_name": "SPDK_TEST_first" 00:07:10.402 } 00:07:10.402 } 00:07:10.402 } 00:07:10.402 ]' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:10.402 { 00:07:10.402 "name": "Nvme1n1p2", 00:07:10.402 "aliases": [ 00:07:10.402 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:10.402 ], 00:07:10.402 "product_name": "GPT Disk", 00:07:10.402 "block_size": 4096, 00:07:10.402 "num_blocks": 655103, 00:07:10.402 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:10.402 "assigned_rate_limits": { 00:07:10.402 "rw_ios_per_sec": 0, 00:07:10.402 "rw_mbytes_per_sec": 0, 00:07:10.402 "r_mbytes_per_sec": 0, 00:07:10.402 "w_mbytes_per_sec": 0 00:07:10.402 }, 00:07:10.402 "claimed": false, 00:07:10.402 "zoned": false, 00:07:10.402 "supported_io_types": { 00:07:10.402 "read": true, 00:07:10.402 "write": true, 00:07:10.402 "unmap": true, 00:07:10.402 "flush": true, 00:07:10.402 "reset": true, 00:07:10.402 "nvme_admin": false, 00:07:10.402 "nvme_io": false, 00:07:10.402 "nvme_io_md": false, 00:07:10.402 "write_zeroes": true, 00:07:10.402 "zcopy": false, 00:07:10.402 "get_zone_info": false, 00:07:10.402 "zone_management": false, 00:07:10.402 "zone_append": false, 00:07:10.402 "compare": true, 00:07:10.402 "compare_and_write": false, 00:07:10.402 "abort": true, 00:07:10.402 "seek_hole": false, 00:07:10.402 "seek_data": false, 00:07:10.402 "copy": true, 00:07:10.402 "nvme_iov_md": false 00:07:10.402 }, 00:07:10.402 "driver_specific": { 00:07:10.402 "gpt": { 00:07:10.402 "base_bdev": "Nvme1n1", 00:07:10.402 "offset_blocks": 655360, 00:07:10.402 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:10.402 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:10.402 "partition_name": "SPDK_TEST_second" 00:07:10.402 } 00:07:10.402 } 00:07:10.402 } 00:07:10.402 ]' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63963 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63963 ']' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63963 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63963 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.402 killing process with pid 63963 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63963' 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63963 00:07:10.402 22:47:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63963 00:07:12.318 00:07:12.319 real 0m3.081s 00:07:12.319 user 0m3.144s 00:07:12.319 sys 0m0.437s 00:07:12.319 22:47:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.319 22:47:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:12.319 ************************************ 00:07:12.319 END TEST bdev_gpt_uuid 00:07:12.319 ************************************ 00:07:12.319 22:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:12.319 22:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:12.319 22:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:12.319 22:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:12.319 22:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:12.319 22:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:12.319 22:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:12.319 22:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:12.319 22:47:51 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:12.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.579 Waiting for block devices as requested 00:07:12.579 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:12.579 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:12.579 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:12.840 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:18.174 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:18.174 22:47:56 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:18.174 22:47:56 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:18.174 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:18.174 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:18.174 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:18.174 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:18.174 22:47:57 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:18.174 00:07:18.174 real 0m56.687s 00:07:18.174 user 1m11.896s 00:07:18.174 sys 0m8.067s 00:07:18.174 22:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.174 22:47:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.174 ************************************ 00:07:18.174 END TEST blockdev_nvme_gpt 00:07:18.174 ************************************ 00:07:18.174 22:47:57 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:18.174 22:47:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.174 22:47:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.174 22:47:57 -- common/autotest_common.sh@10 -- # set +x 00:07:18.174 ************************************ 00:07:18.174 START TEST nvme 00:07:18.174 ************************************ 00:07:18.174 22:47:57 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:18.174 * Looking for test storage... 00:07:18.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:18.174 22:47:57 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.174 22:47:57 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:18.174 22:47:57 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.174 22:47:57 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:18.174 22:47:57 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.174 22:47:57 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.174 22:47:57 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.174 22:47:57 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.174 22:47:57 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.174 22:47:57 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.174 22:47:57 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.174 22:47:57 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.174 22:47:57 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.174 22:47:57 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.174 22:47:57 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.174 22:47:57 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:18.174 22:47:57 nvme -- scripts/common.sh@345 -- # : 1 00:07:18.432 22:47:57 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.432 22:47:57 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.432 22:47:57 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:18.432 22:47:57 nvme -- scripts/common.sh@353 -- # local d=1 00:07:18.432 22:47:57 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.432 22:47:57 nvme -- scripts/common.sh@355 -- # echo 1 00:07:18.432 22:47:57 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.432 22:47:57 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:18.432 22:47:57 nvme -- scripts/common.sh@353 -- # local d=2 00:07:18.432 22:47:57 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.432 22:47:57 nvme -- scripts/common.sh@355 -- # echo 2 00:07:18.432 22:47:57 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.432 22:47:57 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.432 22:47:57 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.432 22:47:57 nvme -- scripts/common.sh@368 -- # return 0 00:07:18.432 22:47:57 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.432 22:47:57 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.432 --rc genhtml_branch_coverage=1 00:07:18.432 --rc genhtml_function_coverage=1 00:07:18.432 --rc genhtml_legend=1 00:07:18.432 --rc geninfo_all_blocks=1 00:07:18.432 --rc geninfo_unexecuted_blocks=1 00:07:18.432 00:07:18.432 ' 00:07:18.432 22:47:57 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.432 --rc genhtml_branch_coverage=1 00:07:18.432 --rc genhtml_function_coverage=1 00:07:18.432 --rc genhtml_legend=1 00:07:18.432 --rc geninfo_all_blocks=1 00:07:18.432 --rc geninfo_unexecuted_blocks=1 00:07:18.432 00:07:18.432 ' 00:07:18.432 22:47:57 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.432 --rc genhtml_branch_coverage=1 00:07:18.432 --rc genhtml_function_coverage=1 00:07:18.432 --rc genhtml_legend=1 00:07:18.432 --rc geninfo_all_blocks=1 00:07:18.432 --rc geninfo_unexecuted_blocks=1 00:07:18.432 00:07:18.432 ' 00:07:18.432 22:47:57 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.432 --rc genhtml_branch_coverage=1 00:07:18.432 --rc genhtml_function_coverage=1 00:07:18.432 --rc genhtml_legend=1 00:07:18.432 --rc geninfo_all_blocks=1 00:07:18.432 --rc geninfo_unexecuted_blocks=1 00:07:18.432 00:07:18.432 ' 00:07:18.432 22:47:57 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:18.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:19.254 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:19.254 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:19.254 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:19.254 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:19.254 22:47:58 nvme -- nvme/nvme.sh@79 -- # uname 00:07:19.254 22:47:58 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:19.254 22:47:58 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:19.254 22:47:58 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:19.254 22:47:58 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:19.254 22:47:58 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:19.254 22:47:58 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:19.254 22:47:58 nvme -- common/autotest_common.sh@1075 -- # stubpid=64598 00:07:19.255 22:47:58 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:19.255 Waiting for stub to ready for secondary processes... 00:07:19.255 22:47:58 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:19.255 22:47:58 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:19.255 22:47:58 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64598 ]] 00:07:19.255 22:47:58 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:19.255 [2024-12-13 22:47:58.311173] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:19.255 [2024-12-13 22:47:58.311304] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:20.219 [2024-12-13 22:47:59.260204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.219 22:47:59 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:20.219 22:47:59 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64598 ]] 00:07:20.219 22:47:59 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:20.477 [2024-12-13 22:47:59.370384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.477 [2024-12-13 22:47:59.370535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.477 [2024-12-13 22:47:59.370544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.477 [2024-12-13 22:47:59.394801] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:20.477 [2024-12-13 22:47:59.394844] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:20.477 [2024-12-13 22:47:59.406349] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:20.477 [2024-12-13 22:47:59.406437] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:20.477 [2024-12-13 22:47:59.408482] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:20.477 [2024-12-13 22:47:59.408687] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:20.477 [2024-12-13 22:47:59.408779] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:20.477 [2024-12-13 22:47:59.411083] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:20.477 [2024-12-13 22:47:59.411277] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:20.477 [2024-12-13 22:47:59.411346] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:20.477 [2024-12-13 22:47:59.413960] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:20.477 [2024-12-13 22:47:59.414139] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:20.477 [2024-12-13 22:47:59.414205] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:20.477 [2024-12-13 22:47:59.414250] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:20.477 [2024-12-13 22:47:59.414296] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:21.410 done. 00:07:21.410 22:48:00 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:21.410 22:48:00 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:21.410 22:48:00 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:21.410 22:48:00 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:21.410 22:48:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.410 22:48:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.410 ************************************ 00:07:21.410 START TEST nvme_reset 00:07:21.410 ************************************ 00:07:21.410 22:48:00 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:21.410 Initializing NVMe Controllers 00:07:21.410 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:21.410 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:21.410 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:21.410 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:21.410 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:21.410 ************************************ 00:07:21.410 END TEST nvme_reset 00:07:21.410 ************************************ 00:07:21.410 00:07:21.410 real 0m0.227s 00:07:21.410 user 0m0.077s 00:07:21.410 sys 0m0.099s 00:07:21.410 22:48:00 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.410 22:48:00 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:21.671 22:48:00 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:21.671 22:48:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.671 22:48:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.671 22:48:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.671 ************************************ 00:07:21.671 START TEST nvme_identify 00:07:21.671 ************************************ 00:07:21.671 22:48:00 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:21.671 22:48:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:21.671 22:48:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:21.671 22:48:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:21.671 22:48:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:21.671 22:48:00 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:21.671 22:48:00 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:21.671 22:48:00 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:21.671 22:48:00 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:21.671 22:48:00 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:21.671 22:48:00 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:21.671 22:48:00 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:21.671 22:48:00 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:21.671 [2024-12-13 22:48:00.793020] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64631 terminated unexpected 00:07:21.671 ===================================================== 00:07:21.671 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:21.671 ===================================================== 00:07:21.671 Controller Capabilities/Features 00:07:21.671 ================================ 00:07:21.671 Vendor ID: 1b36 00:07:21.671 Subsystem Vendor ID: 1af4 00:07:21.671 Serial Number: 12343 00:07:21.671 Model Number: QEMU NVMe Ctrl 00:07:21.672 Firmware Version: 8.0.0 00:07:21.672 Recommended Arb Burst: 6 00:07:21.672 IEEE OUI Identifier: 00 54 52 00:07:21.672 Multi-path I/O 00:07:21.672 May have multiple subsystem ports: No 00:07:21.672 May have multiple controllers: Yes 00:07:21.672 Associated with SR-IOV VF: No 00:07:21.672 Max Data Transfer Size: 524288 00:07:21.672 Max Number of Namespaces: 256 00:07:21.672 Max Number of I/O Queues: 64 00:07:21.672 NVMe Specification Version (VS): 1.4 00:07:21.672 NVMe Specification Version (Identify): 1.4 00:07:21.672 Maximum Queue Entries: 2048 00:07:21.672 Contiguous Queues Required: Yes 00:07:21.672 Arbitration Mechanisms Supported 00:07:21.672 Weighted Round Robin: Not Supported 00:07:21.672 Vendor Specific: Not Supported 00:07:21.672 Reset Timeout: 7500 ms 00:07:21.672 Doorbell Stride: 4 bytes 00:07:21.672 NVM Subsystem Reset: Not Supported 00:07:21.672 Command Sets Supported 00:07:21.672 NVM Command Set: Supported 00:07:21.672 Boot Partition: Not Supported 00:07:21.672 Memory Page Size Minimum: 4096 bytes 00:07:21.672 Memory Page Size Maximum: 65536 bytes 00:07:21.672 Persistent Memory Region: Not Supported 00:07:21.672 Optional Asynchronous Events Supported 00:07:21.672 Namespace Attribute Notices: Supported 00:07:21.672 Firmware Activation Notices: Not Supported 00:07:21.672 ANA Change Notices: Not Supported 00:07:21.672 PLE Aggregate Log Change Notices: Not Supported 00:07:21.672 LBA Status Info Alert Notices: Not Supported 00:07:21.672 EGE Aggregate Log Change Notices: Not Supported 00:07:21.672 Normal NVM Subsystem Shutdown event: Not Supported 00:07:21.672 Zone Descriptor Change Notices: Not Supported 00:07:21.672 Discovery Log Change Notices: Not Supported 00:07:21.672 Controller Attributes 00:07:21.672 128-bit Host Identifier: Not Supported 00:07:21.672 Non-Operational Permissive Mode: Not Supported 00:07:21.672 NVM Sets: Not Supported 00:07:21.672 Read Recovery Levels: Not Supported 00:07:21.672 Endurance Groups: Supported 00:07:21.672 Predictable Latency Mode: Not Supported 00:07:21.672 Traffic Based Keep ALive: Not Supported 00:07:21.672 Namespace Granularity: Not Supported 00:07:21.672 SQ Associations: Not Supported 00:07:21.672 UUID List: Not Supported 00:07:21.672 Multi-Domain Subsystem: Not Supported 00:07:21.672 Fixed Capacity Management: Not Supported 00:07:21.672 Variable Capacity Management: Not Supported 00:07:21.672 Delete Endurance Group: Not Supported 00:07:21.672 Delete NVM Set: Not Supported 00:07:21.672 Extended LBA Formats Supported: Supported 00:07:21.672 Flexible Data Placement Supported: Supported 00:07:21.672 00:07:21.672 Controller Memory Buffer Support 00:07:21.672 ================================ 00:07:21.672 Supported: No 00:07:21.672 
00:07:21.672 Persistent Memory Region Support 00:07:21.672 ================================ 00:07:21.672 Supported: No 00:07:21.672 00:07:21.672 Admin Command Set Attributes 00:07:21.672 ============================ 00:07:21.672 Security Send/Receive: Not Supported 00:07:21.672 Format NVM: Supported 00:07:21.672 Firmware Activate/Download: Not Supported 00:07:21.672 Namespace Management: Supported 00:07:21.672 Device Self-Test: Not Supported 00:07:21.672 Directives: Supported 00:07:21.672 NVMe-MI: Not Supported 00:07:21.672 Virtualization Management: Not Supported 00:07:21.672 Doorbell Buffer Config: Supported 00:07:21.672 Get LBA Status Capability: Not Supported 00:07:21.672 Command & Feature Lockdown Capability: Not Supported 00:07:21.672 Abort Command Limit: 4 00:07:21.672 Async Event Request Limit: 4 00:07:21.672 Number of Firmware Slots: N/A 00:07:21.672 Firmware Slot 1 Read-Only: N/A 00:07:21.672 Firmware Activation Without Reset: N/A 00:07:21.672 Multiple Update Detection Support: N/A 00:07:21.672 Firmware Update Granularity: No Information Provided 00:07:21.672 Per-Namespace SMART Log: Yes 00:07:21.672 Asymmetric Namespace Access Log Page: Not Supported 00:07:21.672 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:21.672 Command Effects Log Page: Supported 00:07:21.672 Get Log Page Extended Data: Supported 00:07:21.672 Telemetry Log Pages: Not Supported 00:07:21.672 Persistent Event Log Pages: Not Supported 00:07:21.672 Supported Log Pages Log Page: May Support 00:07:21.672 Commands Supported & Effects Log Page: Not Supported 00:07:21.672 Feature Identifiers & Effects Log Page:May Support 00:07:21.672 NVMe-MI Commands & Effects Log Page: May Support 00:07:21.672 Data Area 4 for Telemetry Log: Not Supported 00:07:21.672 Error Log Page Entries Supported: 1 00:07:21.672 Keep Alive: Not Supported 00:07:21.672 00:07:21.672 NVM Command Set Attributes 00:07:21.672 ========================== 00:07:21.672 Submission Queue Entry Size 00:07:21.672 Max: 64 00:07:21.672 Min: 64 00:07:21.672 Completion Queue Entry Size 00:07:21.672 Max: 16 00:07:21.672 Min: 16 00:07:21.672 Number of Namespaces: 256 00:07:21.672 Compare Command: Supported 00:07:21.672 Write Uncorrectable Command: Not Supported 00:07:21.672 Dataset Management Command: Supported 00:07:21.672 Write Zeroes Command: Supported 00:07:21.672 Set Features Save Field: Supported 00:07:21.672 Reservations: Not Supported 00:07:21.672 Timestamp: Supported 00:07:21.672 Copy: Supported 00:07:21.672 Volatile Write Cache: Present 00:07:21.672 Atomic Write Unit (Normal): 1 00:07:21.672 Atomic Write Unit (PFail): 1 00:07:21.672 Atomic Compare & Write Unit: 1 00:07:21.672 Fused Compare & Write: Not Supported 00:07:21.672 Scatter-Gather List 00:07:21.672 SGL Command Set: Supported 00:07:21.672 SGL Keyed: Not Supported 00:07:21.672 SGL Bit Bucket Descriptor: Not Supported 00:07:21.672 SGL Metadata Pointer: Not Supported 00:07:21.672 Oversized SGL: Not Supported 00:07:21.672 SGL Metadata Address: Not Supported 00:07:21.672 SGL Offset: Not Supported 00:07:21.672 Transport SGL Data Block: Not Supported 00:07:21.672 Replay Protected Memory Block: Not Supported 00:07:21.672 00:07:21.672 Firmware Slot Information 00:07:21.672 ========================= 00:07:21.672 Active slot: 1 00:07:21.672 Slot 1 Firmware Revision: 1.0 00:07:21.672 00:07:21.672 00:07:21.672 Commands Supported and Effects 00:07:21.672 ============================== 00:07:21.672 Admin Commands 00:07:21.672 -------------- 00:07:21.672 Delete I/O Submission Queue (00h): Supported 
00:07:21.672 Create I/O Submission Queue (01h): Supported 00:07:21.672 Get Log Page (02h): Supported 00:07:21.672 Delete I/O Completion Queue (04h): Supported 00:07:21.672 Create I/O Completion Queue (05h): Supported 00:07:21.672 Identify (06h): Supported 00:07:21.672 Abort (08h): Supported 00:07:21.672 Set Features (09h): Supported 00:07:21.672 Get Features (0Ah): Supported 00:07:21.672 Asynchronous Event Request (0Ch): Supported 00:07:21.672 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:21.672 Directive Send (19h): Supported 00:07:21.672 Directive Receive (1Ah): Supported 00:07:21.672 Virtualization Management (1Ch): Supported 00:07:21.672 Doorbell Buffer Config (7Ch): Supported 00:07:21.672 Format NVM (80h): Supported LBA-Change 00:07:21.672 I/O Commands 00:07:21.672 ------------ 00:07:21.672 Flush (00h): Supported LBA-Change 00:07:21.672 Write (01h): Supported LBA-Change 00:07:21.672 Read (02h): Supported 00:07:21.672 Compare (05h): Supported 00:07:21.672 Write Zeroes (08h): Supported LBA-Change 00:07:21.672 Dataset Management (09h): Supported LBA-Change 00:07:21.672 Unknown (0Ch): Supported 00:07:21.672 Unknown (12h): Supported 00:07:21.672 Copy (19h): Supported LBA-Change 00:07:21.672 Unknown (1Dh): Supported LBA-Change 00:07:21.672 00:07:21.672 Error Log 00:07:21.672 ========= 00:07:21.672 00:07:21.672 Arbitration 00:07:21.672 =========== 00:07:21.672 Arbitration Burst: no limit 00:07:21.672 00:07:21.672 Power Management 00:07:21.672 ================ 00:07:21.672 Number of Power States: 1 00:07:21.672 Current Power State: Power State #0 00:07:21.672 Power State #0: 00:07:21.672 Max Power: 25.00 W 00:07:21.672 Non-Operational State: Operational 00:07:21.672 Entry Latency: 16 microseconds 00:07:21.672 Exit Latency: 4 microseconds 00:07:21.672 Relative Read Throughput: 0 00:07:21.672 Relative Read Latency: 0 00:07:21.672 Relative Write Throughput: 0 00:07:21.672 Relative Write Latency: 0 00:07:21.672 Idle Power: Not Reported 00:07:21.672 Active Power: Not Reported 00:07:21.672 Non-Operational Permissive Mode: Not Supported 00:07:21.672 00:07:21.672 Health Information 00:07:21.672 ================== 00:07:21.672 Critical Warnings: 00:07:21.673 Available Spare Space: OK 00:07:21.673 Temperature: OK 00:07:21.673 Device Reliability: OK 00:07:21.673 Read Only: No 00:07:21.673 Volatile Memory Backup: OK 00:07:21.673 Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.673 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:21.673 Available Spare: 0% 00:07:21.673 Available Spare Threshold: 0% 00:07:21.673 Life Percentage Used: 0% 00:07:21.673 Data Units Read: 966 00:07:21.673 Data Units Written: 896 00:07:21.673 Host Read Commands: 41168 00:07:21.673 Host Write Commands: 40591 00:07:21.673 Controller Busy Time: 0 minutes 00:07:21.673 Power Cycles: 0 00:07:21.673 Power On Hours: 0 hours 00:07:21.673 Unsafe Shutdowns: 0 00:07:21.673 Unrecoverable Media Errors: 0 00:07:21.673 Lifetime Error Log Entries: 0 00:07:21.673 Warning Temperature Time: 0 minutes 00:07:21.673 Critical Temperature Time: 0 minutes 00:07:21.673 00:07:21.673 Number of Queues 00:07:21.673 ================ 00:07:21.673 Number of I/O Submission Queues: 64 00:07:21.673 Number of I/O Completion Queues: 64 00:07:21.673 00:07:21.673 ZNS Specific Controller Data 00:07:21.673 ============================ 00:07:21.673 Zone Append Size Limit: 0 00:07:21.673 00:07:21.673 00:07:21.673 Active Namespaces 00:07:21.673 ================= 00:07:21.673 Namespace ID:1 00:07:21.673 Error Recovery Timeout: Unlimited 00:07:21.673 
Command Set Identifier: NVM (00h) 00:07:21.673 Deallocate: Supported 00:07:21.673 Deallocated/Unwritten Error: Supported 00:07:21.673 Deallocated Read Value: All 0x00 00:07:21.673 Deallocate in Write Zeroes: Not Supported 00:07:21.673 Deallocated Guard Field: 0xFFFF 00:07:21.673 Flush: Supported 00:07:21.673 Reservation: Not Supported 00:07:21.673 Namespace Sharing Capabilities: Multiple Controllers 00:07:21.673 Size (in LBAs): 262144 (1GiB) 00:07:21.673 Capacity (in LBAs): 262144 (1GiB) 00:07:21.673 Utilization (in LBAs): 262144 (1GiB) 00:07:21.673 Thin Provisioning: Not Supported 00:07:21.673 Per-NS Atomic Units: No 00:07:21.673 Maximum Single Source Range Length: 128 00:07:21.673 Maximum Copy Length: 128 00:07:21.673 Maximum Source Range Count: 128 00:07:21.673 NGUID/EUI64 Never Reused: No 00:07:21.673 Namespace Write Protected: No 00:07:21.673 Endurance group ID: 1 00:07:21.673 Number of LBA Formats: 8 00:07:21.673 Current LBA Format: LBA Format #04 00:07:21.673 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.673 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.673 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.673 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.673 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.673 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.673 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.673 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.673 00:07:21.673 Get Feature FDP: 00:07:21.673 ================ 00:07:21.673 Enabled: Yes 00:07:21.673 FDP configuration index: 0 00:07:21.673 00:07:21.673 FDP configurations log page 00:07:21.673 =========================== 00:07:21.673 Number of FDP configurations: 1 00:07:21.673 Version: 0 00:07:21.673 Size: 112 00:07:21.673 FDP Configuration Descriptor: 0 00:07:21.673 Descriptor Size: 96 00:07:21.673 Reclaim Group Identifier format: 2 00:07:21.673 FDP Volatile Write Cache: Not Present 00:07:21.673 FDP Configuration: Valid 00:07:21.673 Vendor Specific Size: 0 00:07:21.673 Number of Reclaim Groups: 2 00:07:21.673 Number of Recalim Unit Handles: 8 00:07:21.673 Max Placement Identifiers: 128 00:07:21.673 Number of Namespaces Suppprted: 256 00:07:21.673 Reclaim unit Nominal Size: 6000000 bytes 00:07:21.673 Estimated Reclaim Unit Time Limit: Not Reported 00:07:21.673 RUH Desc #000: RUH Type: Initially Isolated 00:07:21.673 RUH Desc #001: RUH Type: Initially Isolated 00:07:21.673 RUH Desc #002: RUH Type: Initially Isolated 00:07:21.673 RUH Desc #003: RUH Type: Initially Isolated 00:07:21.673 RUH Desc #004: RUH Type: Initially Isolated 00:07:21.673 RUH Desc #005: RUH Type: Initially Isolated 00:07:21.673 RUH Desc #006: RUH Type: Initially Isolated 00:07:21.673 RUH Desc #007: RUH Type: Initially Isolated 00:07:21.673 00:07:21.673 FDP reclaim unit handle usage log page 00:07:21.673 ==================================[2024-12-13 22:48:00.795661] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64631 terminated unexpected 00:07:21.673 ==== 00:07:21.673 Number of Reclaim Unit Handles: 8 00:07:21.673 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:21.673 RUH Usage Desc #001: RUH Attributes: Unused 00:07:21.673 RUH Usage Desc #002: RUH Attributes: Unused 00:07:21.673 RUH Usage Desc #003: RUH Attributes: Unused 00:07:21.673 RUH Usage Desc #004: RUH Attributes: Unused 00:07:21.673 RUH Usage Desc #005: RUH Attributes: Unused 00:07:21.673 RUH Usage Desc #006: RUH Attributes: Unused 00:07:21.673 RUH Usage Desc 
#007: RUH Attributes: Unused 00:07:21.673 00:07:21.673 FDP statistics log page 00:07:21.673 ======================= 00:07:21.673 Host bytes with metadata written: 554737664 00:07:21.673 Media bytes with metadata written: 554815488 00:07:21.673 Media bytes erased: 0 00:07:21.673 00:07:21.673 FDP events log page 00:07:21.673 =================== 00:07:21.673 Number of FDP events: 0 00:07:21.673 00:07:21.673 NVM Specific Namespace Data 00:07:21.673 =========================== 00:07:21.673 Logical Block Storage Tag Mask: 0 00:07:21.673 Protection Information Capabilities: 00:07:21.673 16b Guard Protection Information Storage Tag Support: No 00:07:21.673 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.673 Storage Tag Check Read Support: No 00:07:21.673 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.673 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.673 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.673 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.673 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.673 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.673 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.673 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.673 ===================================================== 00:07:21.673 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:21.673 ===================================================== 00:07:21.673 Controller Capabilities/Features 00:07:21.673 ================================ 00:07:21.673 Vendor ID: 1b36 00:07:21.673 Subsystem Vendor ID: 1af4 00:07:21.673 Serial Number: 12340 00:07:21.673 Model Number: QEMU NVMe Ctrl 00:07:21.673 Firmware Version: 8.0.0 00:07:21.673 Recommended Arb Burst: 6 00:07:21.673 IEEE OUI Identifier: 00 54 52 00:07:21.673 Multi-path I/O 00:07:21.673 May have multiple subsystem ports: No 00:07:21.673 May have multiple controllers: No 00:07:21.673 Associated with SR-IOV VF: No 00:07:21.673 Max Data Transfer Size: 524288 00:07:21.673 Max Number of Namespaces: 256 00:07:21.673 Max Number of I/O Queues: 64 00:07:21.673 NVMe Specification Version (VS): 1.4 00:07:21.673 NVMe Specification Version (Identify): 1.4 00:07:21.673 Maximum Queue Entries: 2048 00:07:21.673 Contiguous Queues Required: Yes 00:07:21.673 Arbitration Mechanisms Supported 00:07:21.673 Weighted Round Robin: Not Supported 00:07:21.673 Vendor Specific: Not Supported 00:07:21.673 Reset Timeout: 7500 ms 00:07:21.673 Doorbell Stride: 4 bytes 00:07:21.673 NVM Subsystem Reset: Not Supported 00:07:21.673 Command Sets Supported 00:07:21.673 NVM Command Set: Supported 00:07:21.673 Boot Partition: Not Supported 00:07:21.673 Memory Page Size Minimum: 4096 bytes 00:07:21.673 Memory Page Size Maximum: 65536 bytes 00:07:21.673 Persistent Memory Region: Not Supported 00:07:21.673 Optional Asynchronous Events Supported 00:07:21.673 Namespace Attribute Notices: Supported 00:07:21.673 Firmware Activation Notices: Not Supported 00:07:21.673 ANA Change Notices: Not Supported 00:07:21.673 PLE Aggregate Log Change Notices: Not Supported 00:07:21.673 LBA Status Info Alert Notices: Not Supported 00:07:21.673 EGE Aggregate Log Change 
Notices: Not Supported 00:07:21.673 Normal NVM Subsystem Shutdown event: Not Supported 00:07:21.673 Zone Descriptor Change Notices: Not Supported 00:07:21.673 Discovery Log Change Notices: Not Supported 00:07:21.673 Controller Attributes 00:07:21.673 128-bit Host Identifier: Not Supported 00:07:21.673 Non-Operational Permissive Mode: Not Supported 00:07:21.673 NVM Sets: Not Supported 00:07:21.673 Read Recovery Levels: Not Supported 00:07:21.673 Endurance Groups: Not Supported 00:07:21.673 Predictable Latency Mode: Not Supported 00:07:21.673 Traffic Based Keep ALive: Not Supported 00:07:21.673 Namespace Granularity: Not Supported 00:07:21.673 SQ Associations: Not Supported 00:07:21.673 UUID List: Not Supported 00:07:21.673 Multi-Domain Subsystem: Not Supported 00:07:21.673 Fixed Capacity Management: Not Supported 00:07:21.673 Variable Capacity Management: Not Supported 00:07:21.673 Delete Endurance Group: Not Supported 00:07:21.673 Delete NVM Set: Not Supported 00:07:21.673 Extended LBA Formats Supported: Supported 00:07:21.673 Flexible Data Placement Supported: Not Supported 00:07:21.673 00:07:21.673 Controller Memory Buffer Support 00:07:21.673 ================================ 00:07:21.673 Supported: No 00:07:21.673 00:07:21.674 Persistent Memory Region Support 00:07:21.674 ================================ 00:07:21.674 Supported: No 00:07:21.674 00:07:21.674 Admin Command Set Attributes 00:07:21.674 ============================ 00:07:21.674 Security Send/Receive: Not Supported 00:07:21.674 Format NVM: Supported 00:07:21.674 Firmware Activate/Download: Not Supported 00:07:21.674 Namespace Management: Supported 00:07:21.674 Device Self-Test: Not Supported 00:07:21.674 Directives: Supported 00:07:21.674 NVMe-MI: Not Supported 00:07:21.674 Virtualization Management: Not Supported 00:07:21.674 Doorbell Buffer Config: Supported 00:07:21.674 Get LBA Status Capability: Not Supported 00:07:21.674 Command & Feature Lockdown Capability: Not Supported 00:07:21.674 Abort Command Limit: 4 00:07:21.674 Async Event Request Limit: 4 00:07:21.674 Number of Firmware Slots: N/A 00:07:21.674 Firmware Slot 1 Read-Only: N/A 00:07:21.674 Firmware Activation Without Reset: N/A 00:07:21.674 Multiple Update Detection Support: N/A 00:07:21.674 Firmware Update Granularity: No Information Provided 00:07:21.674 Per-Namespace SMART Log: Yes 00:07:21.674 Asymmetric Namespace Access Log Page: Not Supported 00:07:21.674 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:21.674 Command Effects Log Page: Supported 00:07:21.674 Get Log Page Extended Data: Supported 00:07:21.674 Telemetry Log Pages: Not Supported 00:07:21.674 Persistent Event Log Pages: Not Supported 00:07:21.674 Supported Log Pages Log Page: May Support 00:07:21.674 Commands Supported & Effects Log Page: Not Supported 00:07:21.674 Feature Identifiers & Effects Log Page:May Support 00:07:21.674 NVMe-MI Commands & Effects Log Page: May Support 00:07:21.674 Data Area 4 for Telemetry Log: Not Supported 00:07:21.674 Error Log Page Entries Supported: 1 00:07:21.674 Keep Alive: Not Supported 00:07:21.674 00:07:21.674 NVM Command Set Attributes 00:07:21.674 ========================== 00:07:21.674 Submission Queue Entry Size 00:07:21.674 Max: 64 00:07:21.674 Min: 64 00:07:21.674 Completion Queue Entry Size 00:07:21.674 Max: 16 00:07:21.674 Min: 16 00:07:21.674 Number of Namespaces: 256 00:07:21.674 Compare Command: Supported 00:07:21.674 Write Uncorrectable Command: Not Supported 00:07:21.674 Dataset Management Command: Supported 00:07:21.674 Write Zeroes Command: 
Supported 00:07:21.674 Set Features Save Field: Supported 00:07:21.674 Reservations: Not Supported 00:07:21.674 Timestamp: Supported 00:07:21.674 Copy: Supported 00:07:21.674 Volatile Write Cache: Present 00:07:21.674 Atomic Write Unit (Normal): 1 00:07:21.674 Atomic Write Unit (PFail): 1 00:07:21.674 Atomic Compare & Write Unit: 1 00:07:21.674 Fused Compare & Write: Not Supported 00:07:21.674 Scatter-Gather List 00:07:21.674 SGL Command Set: Supported 00:07:21.674 SGL Keyed: Not Supported 00:07:21.674 SGL Bit Bucket Descriptor: Not Supported 00:07:21.674 SGL Metadata Pointer: Not Supported 00:07:21.674 Oversized SGL: Not Supported 00:07:21.674 SGL Metadata Address: Not Supported 00:07:21.674 SGL Offset: Not Supported 00:07:21.674 Transport SGL Data Block: Not Supported 00:07:21.674 Replay Protected Memory Block: Not Supported 00:07:21.674 00:07:21.674 Firmware Slot Information 00:07:21.674 ========================= 00:07:21.674 Active slot: 1 00:07:21.674 Slot 1 Firmware Revision: 1.0 00:07:21.674 00:07:21.674 00:07:21.674 Commands Supported and Effects 00:07:21.674 ============================== 00:07:21.674 Admin Commands 00:07:21.674 -------------- 00:07:21.674 Delete I/O Submission Queue (00h): Supported 00:07:21.674 Create I/O Submission Queue (01h): Supported 00:07:21.674 Get Log Page (02h): Supported 00:07:21.674 Delete I/O Completion Queue (04h): Supported 00:07:21.674 Create I/O Completion Queue (05h): Supported 00:07:21.674 Identify (06h): Supported 00:07:21.674 Abort (08h): Supported 00:07:21.674 Set Features (09h): Supported 00:07:21.674 Get Features (0Ah): Supported 00:07:21.674 Asynchronous Event Request (0Ch): Supported 00:07:21.674 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:21.674 Directive Send (19h): Supported 00:07:21.674 Directive Receive (1Ah): Supported 00:07:21.674 Virtualization Management (1Ch): Supported 00:07:21.674 Doorbell Buffer Config (7Ch): Supported 00:07:21.674 Format NVM (80h): Supported LBA-Change 00:07:21.674 I/O Commands 00:07:21.674 ------------ 00:07:21.674 Flush (00h): Supported LBA-Change 00:07:21.674 Write (01h): Supported LBA-Change 00:07:21.674 Read (02h): Supported 00:07:21.674 Compare (05h): Supported 00:07:21.674 Write Zeroes (08h): Supported LBA-Change 00:07:21.674 Dataset Management (09h): Supported LBA-Change 00:07:21.674 Unknown (0Ch): Supported 00:07:21.674 Unknown (12h): Supported 00:07:21.674 Copy (19h): Supported LBA-Change 00:07:21.674 Unknown (1Dh): Supported LBA-Change 00:07:21.674 00:07:21.674 Error Log 00:07:21.674 ========= 00:07:21.674 00:07:21.674 Arbitration 00:07:21.674 =========== 00:07:21.674 Arbitration Burst: no limit 00:07:21.674 00:07:21.674 Power Management 00:07:21.674 ================ 00:07:21.674 Number of Power States: 1 00:07:21.674 Current Power State: Power State #0 00:07:21.674 Power State #0: 00:07:21.674 Max Power: 25.00 W 00:07:21.674 Non-Operational State: Operational 00:07:21.674 Entry Latency: 16 microseconds 00:07:21.674 Exit Latency: 4 microseconds 00:07:21.674 Relative Read Throughput: 0 00:07:21.674 Relative Read Latency: 0 00:07:21.674 Relative Write Throughput: 0 00:07:21.674 Relative Write Latency: 0 00:07:21.674 Idle Power: Not Reported 00:07:21.674 Active Power: Not Reported 00:07:21.674 Non-Operational Permissive Mode: Not Supported 00:07:21.674 00:07:21.674 Health Information 00:07:21.674 ================== 00:07:21.674 Critical Warnings: 00:07:21.674 Available Spare Space: OK 00:07:21.674 Temperature: OK 00:07:21.674 Device Reliability: OK 00:07:21.674 Read Only: No 
00:07:21.674 Volatile Memory Backup: OK 00:07:21.674 Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.674 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:21.674 Available Spare: 0% 00:07:21.674 Available Spare Threshold: 0% 00:07:21.674 Life Percentage Used: 0% 00:07:21.674 Data Units Read: 674 00:07:21.674 Data Units Written: 602 00:07:21.674 Host Read Commands: 38464 00:07:21.674 Host Write Commands: 38250 00:07:21.674 Controller Busy Time: 0 minutes 00:07:21.674 Power Cycles: 0 00:07:21.674 Power On Hours: 0 hours 00:07:21.674 Unsafe Shutdowns: 0 00:07:21.674 Unrecoverable Media Errors: 0 00:07:21.674 Lifetime Error Log Entries: 0 00:07:21.674 Warning Temperature Time: 0 minutes 00:07:21.674 Critical Temperature Time: 0 minutes 00:07:21.674 00:07:21.674 Number of Queues 00:07:21.674 ================ 00:07:21.674 Number of I/O Submission Queues: 64 00:07:21.674 Number of I/O Completion Queues: 64 00:07:21.674 00:07:21.674 ZNS Specific Controller Data 00:07:21.674 ============================ 00:07:21.674 Zone Append Size Limit: 0 00:07:21.674 00:07:21.674 00:07:21.674 Active Namespaces 00:07:21.674 ================= 00:07:21.674 Namespace ID:1 00:07:21.674 Error Recovery Timeout: Unlimited 00:07:21.674 Command Set Identifier: NVM (00h) 00:07:21.674 Deallocate: Supported 00:07:21.674 Deallocated/Unwritten Error: Supported 00:07:21.674 Deallocated Read Value: All 0x00 00:07:21.674 Deallocate in Write Zeroes: Not Supported 00:07:21.674 Deallocated Guard Field: 0xFFFF 00:07:21.674 Flush: Supported 00:07:21.674 Reservation: Not Supported 00:07:21.674 Metadata Transferred as: Separate Metadata Buffer 00:07:21.674 Namespace Sharing Capabilities: Private 00:07:21.674 Size (in LBAs): 1548666 (5GiB) 00:07:21.674 Capacity (in LBAs): 1548666 (5GiB) 00:07:21.674 Utilization (in LBAs): 1548666 (5GiB) 00:07:21.674 Thin Provisioning: Not Supported 00:07:21.674 Per-NS Atomic Units: No 00:07:21.674 Maximum Single Source Range Length: 128 00:07:21.674 Maximum Copy Length: 128 00:07:21.674 Maximum Source Range Count: 128 00:07:21.674 NGUID/EUI64 Never Reused: No 00:07:21.674 Namespace Write Protected: No 00:07:21.674 Number of LBA Formats: 8 00:07:21.674 Current LBA Format: [2024-12-13 22:48:00.796600] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64631 terminated unexpected 00:07:21.674 LBA Format #07 00:07:21.674 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.674 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.674 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.674 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.674 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.674 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.674 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.674 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.674 00:07:21.674 NVM Specific Namespace Data 00:07:21.674 =========================== 00:07:21.674 Logical Block Storage Tag Mask: 0 00:07:21.674 Protection Information Capabilities: 00:07:21.674 16b Guard Protection Information Storage Tag Support: No 00:07:21.674 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.674 Storage Tag Check Read Support: No 00:07:21.675 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.675 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.675 Extended LBA Format #02: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:21.675 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.675 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.675 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.675 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.675 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.675 ===================================================== 00:07:21.675 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:21.675 ===================================================== 00:07:21.675 Controller Capabilities/Features 00:07:21.675 ================================ 00:07:21.675 Vendor ID: 1b36 00:07:21.675 Subsystem Vendor ID: 1af4 00:07:21.675 Serial Number: 12341 00:07:21.675 Model Number: QEMU NVMe Ctrl 00:07:21.675 Firmware Version: 8.0.0 00:07:21.675 Recommended Arb Burst: 6 00:07:21.675 IEEE OUI Identifier: 00 54 52 00:07:21.675 Multi-path I/O 00:07:21.675 May have multiple subsystem ports: No 00:07:21.675 May have multiple controllers: No 00:07:21.675 Associated with SR-IOV VF: No 00:07:21.675 Max Data Transfer Size: 524288 00:07:21.675 Max Number of Namespaces: 256 00:07:21.675 Max Number of I/O Queues: 64 00:07:21.675 NVMe Specification Version (VS): 1.4 00:07:21.675 NVMe Specification Version (Identify): 1.4 00:07:21.675 Maximum Queue Entries: 2048 00:07:21.675 Contiguous Queues Required: Yes 00:07:21.675 Arbitration Mechanisms Supported 00:07:21.675 Weighted Round Robin: Not Supported 00:07:21.675 Vendor Specific: Not Supported 00:07:21.675 Reset Timeout: 7500 ms 00:07:21.675 Doorbell Stride: 4 bytes 00:07:21.675 NVM Subsystem Reset: Not Supported 00:07:21.675 Command Sets Supported 00:07:21.675 NVM Command Set: Supported 00:07:21.675 Boot Partition: Not Supported 00:07:21.675 Memory Page Size Minimum: 4096 bytes 00:07:21.675 Memory Page Size Maximum: 65536 bytes 00:07:21.675 Persistent Memory Region: Not Supported 00:07:21.675 Optional Asynchronous Events Supported 00:07:21.675 Namespace Attribute Notices: Supported 00:07:21.675 Firmware Activation Notices: Not Supported 00:07:21.675 ANA Change Notices: Not Supported 00:07:21.675 PLE Aggregate Log Change Notices: Not Supported 00:07:21.675 LBA Status Info Alert Notices: Not Supported 00:07:21.675 EGE Aggregate Log Change Notices: Not Supported 00:07:21.675 Normal NVM Subsystem Shutdown event: Not Supported 00:07:21.675 Zone Descriptor Change Notices: Not Supported 00:07:21.675 Discovery Log Change Notices: Not Supported 00:07:21.675 Controller Attributes 00:07:21.675 128-bit Host Identifier: Not Supported 00:07:21.675 Non-Operational Permissive Mode: Not Supported 00:07:21.675 NVM Sets: Not Supported 00:07:21.675 Read Recovery Levels: Not Supported 00:07:21.675 Endurance Groups: Not Supported 00:07:21.675 Predictable Latency Mode: Not Supported 00:07:21.675 Traffic Based Keep ALive: Not Supported 00:07:21.675 Namespace Granularity: Not Supported 00:07:21.675 SQ Associations: Not Supported 00:07:21.675 UUID List: Not Supported 00:07:21.675 Multi-Domain Subsystem: Not Supported 00:07:21.675 Fixed Capacity Management: Not Supported 00:07:21.675 Variable Capacity Management: Not Supported 00:07:21.675 Delete Endurance Group: Not Supported 00:07:21.675 Delete NVM Set: Not Supported 00:07:21.675 Extended LBA Formats Supported: Supported 00:07:21.675 Flexible Data Placement 
Supported: Not Supported 00:07:21.675 00:07:21.675 Controller Memory Buffer Support 00:07:21.675 ================================ 00:07:21.675 Supported: No 00:07:21.675 00:07:21.675 Persistent Memory Region Support 00:07:21.675 ================================ 00:07:21.675 Supported: No 00:07:21.675 00:07:21.675 Admin Command Set Attributes 00:07:21.675 ============================ 00:07:21.675 Security Send/Receive: Not Supported 00:07:21.675 Format NVM: Supported 00:07:21.675 Firmware Activate/Download: Not Supported 00:07:21.675 Namespace Management: Supported 00:07:21.675 Device Self-Test: Not Supported 00:07:21.675 Directives: Supported 00:07:21.675 NVMe-MI: Not Supported 00:07:21.675 Virtualization Management: Not Supported 00:07:21.675 Doorbell Buffer Config: Supported 00:07:21.675 Get LBA Status Capability: Not Supported 00:07:21.675 Command & Feature Lockdown Capability: Not Supported 00:07:21.675 Abort Command Limit: 4 00:07:21.675 Async Event Request Limit: 4 00:07:21.675 Number of Firmware Slots: N/A 00:07:21.675 Firmware Slot 1 Read-Only: N/A 00:07:21.675 Firmware Activation Without Reset: N/A 00:07:21.675 Multiple Update Detection Support: N/A 00:07:21.675 Firmware Update Granularity: No Information Provided 00:07:21.675 Per-Namespace SMART Log: Yes 00:07:21.675 Asymmetric Namespace Access Log Page: Not Supported 00:07:21.675 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:21.675 Command Effects Log Page: Supported 00:07:21.675 Get Log Page Extended Data: Supported 00:07:21.675 Telemetry Log Pages: Not Supported 00:07:21.675 Persistent Event Log Pages: Not Supported 00:07:21.675 Supported Log Pages Log Page: May Support 00:07:21.675 Commands Supported & Effects Log Page: Not Supported 00:07:21.675 Feature Identifiers & Effects Log Page:May Support 00:07:21.675 NVMe-MI Commands & Effects Log Page: May Support 00:07:21.675 Data Area 4 for Telemetry Log: Not Supported 00:07:21.675 Error Log Page Entries Supported: 1 00:07:21.675 Keep Alive: Not Supported 00:07:21.675 00:07:21.675 NVM Command Set Attributes 00:07:21.675 ========================== 00:07:21.675 Submission Queue Entry Size 00:07:21.675 Max: 64 00:07:21.675 Min: 64 00:07:21.675 Completion Queue Entry Size 00:07:21.675 Max: 16 00:07:21.675 Min: 16 00:07:21.675 Number of Namespaces: 256 00:07:21.675 Compare Command: Supported 00:07:21.675 Write Uncorrectable Command: Not Supported 00:07:21.675 Dataset Management Command: Supported 00:07:21.675 Write Zeroes Command: Supported 00:07:21.675 Set Features Save Field: Supported 00:07:21.675 Reservations: Not Supported 00:07:21.675 Timestamp: Supported 00:07:21.675 Copy: Supported 00:07:21.675 Volatile Write Cache: Present 00:07:21.675 Atomic Write Unit (Normal): 1 00:07:21.675 Atomic Write Unit (PFail): 1 00:07:21.675 Atomic Compare & Write Unit: 1 00:07:21.675 Fused Compare & Write: Not Supported 00:07:21.675 Scatter-Gather List 00:07:21.675 SGL Command Set: Supported 00:07:21.675 SGL Keyed: Not Supported 00:07:21.675 SGL Bit Bucket Descriptor: Not Supported 00:07:21.675 SGL Metadata Pointer: Not Supported 00:07:21.675 Oversized SGL: Not Supported 00:07:21.675 SGL Metadata Address: Not Supported 00:07:21.675 SGL Offset: Not Supported 00:07:21.675 Transport SGL Data Block: Not Supported 00:07:21.675 Replay Protected Memory Block: Not Supported 00:07:21.675 00:07:21.675 Firmware Slot Information 00:07:21.675 ========================= 00:07:21.675 Active slot: 1 00:07:21.675 Slot 1 Firmware Revision: 1.0 00:07:21.675 00:07:21.675 00:07:21.675 Commands Supported and Effects 
00:07:21.675 ============================== 00:07:21.675 Admin Commands 00:07:21.675 -------------- 00:07:21.675 Delete I/O Submission Queue (00h): Supported 00:07:21.675 Create I/O Submission Queue (01h): Supported 00:07:21.675 Get Log Page (02h): Supported 00:07:21.675 Delete I/O Completion Queue (04h): Supported 00:07:21.675 Create I/O Completion Queue (05h): Supported 00:07:21.675 Identify (06h): Supported 00:07:21.675 Abort (08h): Supported 00:07:21.675 Set Features (09h): Supported 00:07:21.675 Get Features (0Ah): Supported 00:07:21.675 Asynchronous Event Request (0Ch): Supported 00:07:21.675 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:21.675 Directive Send (19h): Supported 00:07:21.675 Directive Receive (1Ah): Supported 00:07:21.675 Virtualization Management (1Ch): Supported 00:07:21.675 Doorbell Buffer Config (7Ch): Supported 00:07:21.675 Format NVM (80h): Supported LBA-Change 00:07:21.675 I/O Commands 00:07:21.675 ------------ 00:07:21.675 Flush (00h): Supported LBA-Change 00:07:21.675 Write (01h): Supported LBA-Change 00:07:21.675 Read (02h): Supported 00:07:21.675 Compare (05h): Supported 00:07:21.675 Write Zeroes (08h): Supported LBA-Change 00:07:21.675 Dataset Management (09h): Supported LBA-Change 00:07:21.675 Unknown (0Ch): Supported 00:07:21.675 Unknown (12h): Supported 00:07:21.675 Copy (19h): Supported LBA-Change 00:07:21.675 Unknown (1Dh): Supported LBA-Change 00:07:21.675 00:07:21.675 Error Log 00:07:21.675 ========= 00:07:21.675 00:07:21.675 Arbitration 00:07:21.676 =========== 00:07:21.676 Arbitration Burst: no limit 00:07:21.676 00:07:21.676 Power Management 00:07:21.676 ================ 00:07:21.676 Number of Power States: 1 00:07:21.676 Current Power State: Power State #0 00:07:21.676 Power State #0: 00:07:21.676 Max Power: 25.00 W 00:07:21.676 Non-Operational State: Operational 00:07:21.676 Entry Latency: 16 microseconds 00:07:21.676 Exit Latency: 4 microseconds 00:07:21.676 Relative Read Throughput: 0 00:07:21.676 Relative Read Latency: 0 00:07:21.676 Relative Write Throughput: 0 00:07:21.676 Relative Write Latency: 0 00:07:21.676 Idle Power: Not Reported 00:07:21.676 Active Power: Not Reported 00:07:21.676 Non-Operational Permissive Mode: Not Supported 00:07:21.676 00:07:21.676 Health Information 00:07:21.676 ================== 00:07:21.676 Critical Warnings: 00:07:21.676 Available Spare Space: OK 00:07:21.676 Temperature: OK 00:07:21.676 Device Reliability: OK 00:07:21.676 Read Only: No 00:07:21.676 Volatile Memory Backup: OK 00:07:21.676 Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.676 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:21.676 Available Spare: 0% 00:07:21.676 Available Spare Threshold: 0% 00:07:21.676 Life Percentage Used: 0% 00:07:21.676 Data Units Read: 1030 00:07:21.676 Data Units Written: 907 00:07:21.676 Host Read Commands: 56990 00:07:21.676 Host Write Commands: 55934 00:07:21.676 Controller Busy Time: 0 minutes 00:07:21.676 Power Cycles: 0 00:07:21.676 Power On Hours: 0 hours 00:07:21.676 Unsafe Shutdowns: 0 00:07:21.676 Unrecoverable Media Errors: 0 00:07:21.676 Lifetime Error Log Entries: 0 00:07:21.676 Warning Temperature Time: 0 minutes 00:07:21.676 Critical Temperature Time: 0 minutes 00:07:21.676 00:07:21.676 Number of Queues 00:07:21.676 ================ 00:07:21.676 Number of I/O Submission Queues: 64 00:07:21.676 Number of I/O Completion Queues: 64 00:07:21.676 00:07:21.676 ZNS Specific Controller Data 00:07:21.676 ============================ 00:07:21.676 Zone Append Size Limit: 0 00:07:21.676 
00:07:21.676 00:07:21.676 Active Namespaces 00:07:21.676 ================= 00:07:21.676 Namespace ID:1 00:07:21.676 Error Recovery Timeout: Unlimited 00:07:21.676 Command Set Identifier: NVM (00h) 00:07:21.676 Deallocate: Supported 00:07:21.676 Deallocated/Unwritten Error: Supported 00:07:21.676 Deallocated Read Value: All 0x00 00:07:21.676 Deallocate in Write Zeroes: Not Supported 00:07:21.676 Deallocated Guard Field: 0xFFFF 00:07:21.676 Flush: Supported 00:07:21.676 Reservation: Not Supported 00:07:21.676 Namespace Sharing Capabilities: Private 00:07:21.676 Size (in LBAs): 1310720 (5GiB) 00:07:21.676 Capacity (in LBAs): 1310720 (5GiB) 00:07:21.676 Utilization (in LBAs): 1310720 (5GiB) 00:07:21.676 Thin Provisioning: Not Supported 00:07:21.676 Per-NS Atomic Units: No 00:07:21.676 Maximum Single Source Range Length: 128 00:07:21.676 Maximum Copy Length: 128 00:07:21.676 Maximum Source Range Count: 128 00:07:21.676 NGUID/EUI64 Never Reused: No 00:07:21.676 Namespace Write Protected: No 00:07:21.676 Number of LBA Formats: 8 00:07:21.676 Current LBA Format: LBA Format #04 00:07:21.676 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.676 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.676 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.676 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.676 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.676 LBA Forma[2024-12-13 22:48:00.797540] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64631 terminated unexpected 00:07:21.676 t #05: Data Size: 4096 Metadata Size: 8 00:07:21.676 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.676 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.676 00:07:21.676 NVM Specific Namespace Data 00:07:21.676 =========================== 00:07:21.676 Logical Block Storage Tag Mask: 0 00:07:21.676 Protection Information Capabilities: 00:07:21.676 16b Guard Protection Information Storage Tag Support: No 00:07:21.676 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.676 Storage Tag Check Read Support: No 00:07:21.676 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.676 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.676 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.676 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.676 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.676 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.676 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.676 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.676 ===================================================== 00:07:21.676 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:21.676 ===================================================== 00:07:21.676 Controller Capabilities/Features 00:07:21.676 ================================ 00:07:21.676 Vendor ID: 1b36 00:07:21.676 Subsystem Vendor ID: 1af4 00:07:21.676 Serial Number: 12342 00:07:21.676 Model Number: QEMU NVMe Ctrl 00:07:21.676 Firmware Version: 8.0.0 00:07:21.676 Recommended Arb Burst: 6 00:07:21.676 IEEE OUI Identifier: 00 54 52 00:07:21.676 Multi-path I/O 
00:07:21.676 May have multiple subsystem ports: No 00:07:21.676 May have multiple controllers: No 00:07:21.676 Associated with SR-IOV VF: No 00:07:21.676 Max Data Transfer Size: 524288 00:07:21.676 Max Number of Namespaces: 256 00:07:21.676 Max Number of I/O Queues: 64 00:07:21.676 NVMe Specification Version (VS): 1.4 00:07:21.676 NVMe Specification Version (Identify): 1.4 00:07:21.676 Maximum Queue Entries: 2048 00:07:21.676 Contiguous Queues Required: Yes 00:07:21.676 Arbitration Mechanisms Supported 00:07:21.676 Weighted Round Robin: Not Supported 00:07:21.676 Vendor Specific: Not Supported 00:07:21.676 Reset Timeout: 7500 ms 00:07:21.676 Doorbell Stride: 4 bytes 00:07:21.676 NVM Subsystem Reset: Not Supported 00:07:21.676 Command Sets Supported 00:07:21.676 NVM Command Set: Supported 00:07:21.676 Boot Partition: Not Supported 00:07:21.676 Memory Page Size Minimum: 4096 bytes 00:07:21.676 Memory Page Size Maximum: 65536 bytes 00:07:21.676 Persistent Memory Region: Not Supported 00:07:21.676 Optional Asynchronous Events Supported 00:07:21.676 Namespace Attribute Notices: Supported 00:07:21.676 Firmware Activation Notices: Not Supported 00:07:21.676 ANA Change Notices: Not Supported 00:07:21.676 PLE Aggregate Log Change Notices: Not Supported 00:07:21.676 LBA Status Info Alert Notices: Not Supported 00:07:21.676 EGE Aggregate Log Change Notices: Not Supported 00:07:21.676 Normal NVM Subsystem Shutdown event: Not Supported 00:07:21.676 Zone Descriptor Change Notices: Not Supported 00:07:21.676 Discovery Log Change Notices: Not Supported 00:07:21.676 Controller Attributes 00:07:21.676 128-bit Host Identifier: Not Supported 00:07:21.676 Non-Operational Permissive Mode: Not Supported 00:07:21.676 NVM Sets: Not Supported 00:07:21.676 Read Recovery Levels: Not Supported 00:07:21.676 Endurance Groups: Not Supported 00:07:21.676 Predictable Latency Mode: Not Supported 00:07:21.676 Traffic Based Keep ALive: Not Supported 00:07:21.676 Namespace Granularity: Not Supported 00:07:21.676 SQ Associations: Not Supported 00:07:21.676 UUID List: Not Supported 00:07:21.676 Multi-Domain Subsystem: Not Supported 00:07:21.676 Fixed Capacity Management: Not Supported 00:07:21.676 Variable Capacity Management: Not Supported 00:07:21.676 Delete Endurance Group: Not Supported 00:07:21.676 Delete NVM Set: Not Supported 00:07:21.676 Extended LBA Formats Supported: Supported 00:07:21.676 Flexible Data Placement Supported: Not Supported 00:07:21.676 00:07:21.676 Controller Memory Buffer Support 00:07:21.677 ================================ 00:07:21.677 Supported: No 00:07:21.677 00:07:21.677 Persistent Memory Region Support 00:07:21.677 ================================ 00:07:21.677 Supported: No 00:07:21.677 00:07:21.677 Admin Command Set Attributes 00:07:21.677 ============================ 00:07:21.677 Security Send/Receive: Not Supported 00:07:21.677 Format NVM: Supported 00:07:21.677 Firmware Activate/Download: Not Supported 00:07:21.677 Namespace Management: Supported 00:07:21.677 Device Self-Test: Not Supported 00:07:21.677 Directives: Supported 00:07:21.677 NVMe-MI: Not Supported 00:07:21.677 Virtualization Management: Not Supported 00:07:21.677 Doorbell Buffer Config: Supported 00:07:21.677 Get LBA Status Capability: Not Supported 00:07:21.677 Command & Feature Lockdown Capability: Not Supported 00:07:21.677 Abort Command Limit: 4 00:07:21.677 Async Event Request Limit: 4 00:07:21.677 Number of Firmware Slots: N/A 00:07:21.677 Firmware Slot 1 Read-Only: N/A 00:07:21.677 Firmware Activation Without Reset: N/A 
00:07:21.677 Multiple Update Detection Support: N/A 00:07:21.677 Firmware Update Granularity: No Information Provided 00:07:21.677 Per-Namespace SMART Log: Yes 00:07:21.677 Asymmetric Namespace Access Log Page: Not Supported 00:07:21.677 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:21.677 Command Effects Log Page: Supported 00:07:21.677 Get Log Page Extended Data: Supported 00:07:21.677 Telemetry Log Pages: Not Supported 00:07:21.677 Persistent Event Log Pages: Not Supported 00:07:21.677 Supported Log Pages Log Page: May Support 00:07:21.677 Commands Supported & Effects Log Page: Not Supported 00:07:21.677 Feature Identifiers & Effects Log Page:May Support 00:07:21.677 NVMe-MI Commands & Effects Log Page: May Support 00:07:21.677 Data Area 4 for Telemetry Log: Not Supported 00:07:21.677 Error Log Page Entries Supported: 1 00:07:21.677 Keep Alive: Not Supported 00:07:21.677 00:07:21.677 NVM Command Set Attributes 00:07:21.677 ========================== 00:07:21.677 Submission Queue Entry Size 00:07:21.677 Max: 64 00:07:21.677 Min: 64 00:07:21.677 Completion Queue Entry Size 00:07:21.677 Max: 16 00:07:21.677 Min: 16 00:07:21.677 Number of Namespaces: 256 00:07:21.677 Compare Command: Supported 00:07:21.677 Write Uncorrectable Command: Not Supported 00:07:21.677 Dataset Management Command: Supported 00:07:21.677 Write Zeroes Command: Supported 00:07:21.677 Set Features Save Field: Supported 00:07:21.677 Reservations: Not Supported 00:07:21.677 Timestamp: Supported 00:07:21.677 Copy: Supported 00:07:21.677 Volatile Write Cache: Present 00:07:21.677 Atomic Write Unit (Normal): 1 00:07:21.677 Atomic Write Unit (PFail): 1 00:07:21.677 Atomic Compare & Write Unit: 1 00:07:21.677 Fused Compare & Write: Not Supported 00:07:21.677 Scatter-Gather List 00:07:21.677 SGL Command Set: Supported 00:07:21.677 SGL Keyed: Not Supported 00:07:21.677 SGL Bit Bucket Descriptor: Not Supported 00:07:21.677 SGL Metadata Pointer: Not Supported 00:07:21.677 Oversized SGL: Not Supported 00:07:21.677 SGL Metadata Address: Not Supported 00:07:21.677 SGL Offset: Not Supported 00:07:21.677 Transport SGL Data Block: Not Supported 00:07:21.677 Replay Protected Memory Block: Not Supported 00:07:21.677 00:07:21.677 Firmware Slot Information 00:07:21.677 ========================= 00:07:21.677 Active slot: 1 00:07:21.677 Slot 1 Firmware Revision: 1.0 00:07:21.677 00:07:21.677 00:07:21.677 Commands Supported and Effects 00:07:21.677 ============================== 00:07:21.677 Admin Commands 00:07:21.677 -------------- 00:07:21.677 Delete I/O Submission Queue (00h): Supported 00:07:21.677 Create I/O Submission Queue (01h): Supported 00:07:21.677 Get Log Page (02h): Supported 00:07:21.677 Delete I/O Completion Queue (04h): Supported 00:07:21.677 Create I/O Completion Queue (05h): Supported 00:07:21.677 Identify (06h): Supported 00:07:21.677 Abort (08h): Supported 00:07:21.677 Set Features (09h): Supported 00:07:21.677 Get Features (0Ah): Supported 00:07:21.677 Asynchronous Event Request (0Ch): Supported 00:07:21.677 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:21.677 Directive Send (19h): Supported 00:07:21.677 Directive Receive (1Ah): Supported 00:07:21.677 Virtualization Management (1Ch): Supported 00:07:21.677 Doorbell Buffer Config (7Ch): Supported 00:07:21.677 Format NVM (80h): Supported LBA-Change 00:07:21.677 I/O Commands 00:07:21.677 ------------ 00:07:21.677 Flush (00h): Supported LBA-Change 00:07:21.677 Write (01h): Supported LBA-Change 00:07:21.677 Read (02h): Supported 00:07:21.677 Compare (05h): 
Supported 00:07:21.677 Write Zeroes (08h): Supported LBA-Change 00:07:21.677 Dataset Management (09h): Supported LBA-Change 00:07:21.677 Unknown (0Ch): Supported 00:07:21.677 Unknown (12h): Supported 00:07:21.677 Copy (19h): Supported LBA-Change 00:07:21.677 Unknown (1Dh): Supported LBA-Change 00:07:21.677 00:07:21.677 Error Log 00:07:21.677 ========= 00:07:21.677 00:07:21.677 Arbitration 00:07:21.677 =========== 00:07:21.677 Arbitration Burst: no limit 00:07:21.677 00:07:21.677 Power Management 00:07:21.677 ================ 00:07:21.677 Number of Power States: 1 00:07:21.677 Current Power State: Power State #0 00:07:21.677 Power State #0: 00:07:21.677 Max Power: 25.00 W 00:07:21.677 Non-Operational State: Operational 00:07:21.677 Entry Latency: 16 microseconds 00:07:21.677 Exit Latency: 4 microseconds 00:07:21.677 Relative Read Throughput: 0 00:07:21.677 Relative Read Latency: 0 00:07:21.677 Relative Write Throughput: 0 00:07:21.677 Relative Write Latency: 0 00:07:21.677 Idle Power: Not Reported 00:07:21.677 Active Power: Not Reported 00:07:21.677 Non-Operational Permissive Mode: Not Supported 00:07:21.677 00:07:21.677 Health Information 00:07:21.677 ================== 00:07:21.677 Critical Warnings: 00:07:21.677 Available Spare Space: OK 00:07:21.677 Temperature: OK 00:07:21.677 Device Reliability: OK 00:07:21.677 Read Only: No 00:07:21.677 Volatile Memory Backup: OK 00:07:21.677 Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.677 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:21.677 Available Spare: 0% 00:07:21.677 Available Spare Threshold: 0% 00:07:21.677 Life Percentage Used: 0% 00:07:21.677 Data Units Read: 2248 00:07:21.677 Data Units Written: 2035 00:07:21.677 Host Read Commands: 118183 00:07:21.677 Host Write Commands: 116452 00:07:21.677 Controller Busy Time: 0 minutes 00:07:21.677 Power Cycles: 0 00:07:21.677 Power On Hours: 0 hours 00:07:21.677 Unsafe Shutdowns: 0 00:07:21.677 Unrecoverable Media Errors: 0 00:07:21.677 Lifetime Error Log Entries: 0 00:07:21.677 Warning Temperature Time: 0 minutes 00:07:21.677 Critical Temperature Time: 0 minutes 00:07:21.677 00:07:21.677 Number of Queues 00:07:21.677 ================ 00:07:21.677 Number of I/O Submission Queues: 64 00:07:21.677 Number of I/O Completion Queues: 64 00:07:21.677 00:07:21.677 ZNS Specific Controller Data 00:07:21.677 ============================ 00:07:21.677 Zone Append Size Limit: 0 00:07:21.677 00:07:21.677 00:07:21.677 Active Namespaces 00:07:21.677 ================= 00:07:21.677 Namespace ID:1 00:07:21.677 Error Recovery Timeout: Unlimited 00:07:21.677 Command Set Identifier: NVM (00h) 00:07:21.677 Deallocate: Supported 00:07:21.677 Deallocated/Unwritten Error: Supported 00:07:21.677 Deallocated Read Value: All 0x00 00:07:21.677 Deallocate in Write Zeroes: Not Supported 00:07:21.677 Deallocated Guard Field: 0xFFFF 00:07:21.677 Flush: Supported 00:07:21.677 Reservation: Not Supported 00:07:21.677 Namespace Sharing Capabilities: Private 00:07:21.677 Size (in LBAs): 1048576 (4GiB) 00:07:21.677 Capacity (in LBAs): 1048576 (4GiB) 00:07:21.677 Utilization (in LBAs): 1048576 (4GiB) 00:07:21.677 Thin Provisioning: Not Supported 00:07:21.677 Per-NS Atomic Units: No 00:07:21.677 Maximum Single Source Range Length: 128 00:07:21.677 Maximum Copy Length: 128 00:07:21.677 Maximum Source Range Count: 128 00:07:21.677 NGUID/EUI64 Never Reused: No 00:07:21.677 Namespace Write Protected: No 00:07:21.677 Number of LBA Formats: 8 00:07:21.677 Current LBA Format: LBA Format #04 00:07:21.677 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:07:21.677 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.677 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.677 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.677 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.677 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.677 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.677 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.677 00:07:21.677 NVM Specific Namespace Data 00:07:21.677 =========================== 00:07:21.677 Logical Block Storage Tag Mask: 0 00:07:21.677 Protection Information Capabilities: 00:07:21.678 16b Guard Protection Information Storage Tag Support: No 00:07:21.678 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.678 Storage Tag Check Read Support: No 00:07:21.678 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Namespace ID:2 00:07:21.678 Error Recovery Timeout: Unlimited 00:07:21.678 Command Set Identifier: NVM (00h) 00:07:21.678 Deallocate: Supported 00:07:21.678 Deallocated/Unwritten Error: Supported 00:07:21.678 Deallocated Read Value: All 0x00 00:07:21.678 Deallocate in Write Zeroes: Not Supported 00:07:21.678 Deallocated Guard Field: 0xFFFF 00:07:21.678 Flush: Supported 00:07:21.678 Reservation: Not Supported 00:07:21.678 Namespace Sharing Capabilities: Private 00:07:21.678 Size (in LBAs): 1048576 (4GiB) 00:07:21.678 Capacity (in LBAs): 1048576 (4GiB) 00:07:21.678 Utilization (in LBAs): 1048576 (4GiB) 00:07:21.678 Thin Provisioning: Not Supported 00:07:21.678 Per-NS Atomic Units: No 00:07:21.678 Maximum Single Source Range Length: 128 00:07:21.678 Maximum Copy Length: 128 00:07:21.678 Maximum Source Range Count: 128 00:07:21.678 NGUID/EUI64 Never Reused: No 00:07:21.678 Namespace Write Protected: No 00:07:21.678 Number of LBA Formats: 8 00:07:21.678 Current LBA Format: LBA Format #04 00:07:21.678 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.678 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.678 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.678 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.678 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.678 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.678 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.678 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.678 00:07:21.678 NVM Specific Namespace Data 00:07:21.678 =========================== 00:07:21.678 Logical Block Storage Tag Mask: 0 00:07:21.678 Protection Information Capabilities: 00:07:21.678 16b Guard Protection Information Storage Tag Support: No 00:07:21.678 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:07:21.678 Storage Tag Check Read Support: No 00:07:21.678 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.678 Namespace ID:3 00:07:21.678 Error Recovery Timeout: Unlimited 00:07:21.678 Command Set Identifier: NVM (00h) 00:07:21.678 Deallocate: Supported 00:07:21.678 Deallocated/Unwritten Error: Supported 00:07:21.678 Deallocated Read Value: All 0x00 00:07:21.678 Deallocate in Write Zeroes: Not Supported 00:07:21.678 Deallocated Guard Field: 0xFFFF 00:07:21.678 Flush: Supported 00:07:21.678 Reservation: Not Supported 00:07:21.678 Namespace Sharing Capabilities: Private 00:07:21.678 Size (in LBAs): 1048576 (4GiB) 00:07:21.937 Capacity (in LBAs): 1048576 (4GiB) 00:07:21.937 Utilization (in LBAs): 1048576 (4GiB) 00:07:21.937 Thin Provisioning: Not Supported 00:07:21.937 Per-NS Atomic Units: No 00:07:21.937 Maximum Single Source Range Length: 128 00:07:21.937 Maximum Copy Length: 128 00:07:21.937 Maximum Source Range Count: 128 00:07:21.937 NGUID/EUI64 Never Reused: No 00:07:21.937 Namespace Write Protected: No 00:07:21.937 Number of LBA Formats: 8 00:07:21.937 Current LBA Format: LBA Format #04 00:07:21.937 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.937 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.937 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.937 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.937 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.937 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.937 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.937 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.937 00:07:21.937 NVM Specific Namespace Data 00:07:21.937 =========================== 00:07:21.937 Logical Block Storage Tag Mask: 0 00:07:21.937 Protection Information Capabilities: 00:07:21.937 16b Guard Protection Information Storage Tag Support: No 00:07:21.937 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.937 Storage Tag Check Read Support: No 00:07:21.937 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.937 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.937 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.937 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.937 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.937 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.937 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.937 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.937 22:48:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:21.937 22:48:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:21.937 ===================================================== 00:07:21.937 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:21.937 ===================================================== 00:07:21.937 Controller Capabilities/Features 00:07:21.937 ================================ 00:07:21.938 Vendor ID: 1b36 00:07:21.938 Subsystem Vendor ID: 1af4 00:07:21.938 Serial Number: 12340 00:07:21.938 Model Number: QEMU NVMe Ctrl 00:07:21.938 Firmware Version: 8.0.0 00:07:21.938 Recommended Arb Burst: 6 00:07:21.938 IEEE OUI Identifier: 00 54 52 00:07:21.938 Multi-path I/O 00:07:21.938 May have multiple subsystem ports: No 00:07:21.938 May have multiple controllers: No 00:07:21.938 Associated with SR-IOV VF: No 00:07:21.938 Max Data Transfer Size: 524288 00:07:21.938 Max Number of Namespaces: 256 00:07:21.938 Max Number of I/O Queues: 64 00:07:21.938 NVMe Specification Version (VS): 1.4 00:07:21.938 NVMe Specification Version (Identify): 1.4 00:07:21.938 Maximum Queue Entries: 2048 00:07:21.938 Contiguous Queues Required: Yes 00:07:21.938 Arbitration Mechanisms Supported 00:07:21.938 Weighted Round Robin: Not Supported 00:07:21.938 Vendor Specific: Not Supported 00:07:21.938 Reset Timeout: 7500 ms 00:07:21.938 Doorbell Stride: 4 bytes 00:07:21.938 NVM Subsystem Reset: Not Supported 00:07:21.938 Command Sets Supported 00:07:21.938 NVM Command Set: Supported 00:07:21.938 Boot Partition: Not Supported 00:07:21.938 Memory Page Size Minimum: 4096 bytes 00:07:21.938 Memory Page Size Maximum: 65536 bytes 00:07:21.938 Persistent Memory Region: Not Supported 00:07:21.938 Optional Asynchronous Events Supported 00:07:21.938 Namespace Attribute Notices: Supported 00:07:21.938 Firmware Activation Notices: Not Supported 00:07:21.938 ANA Change Notices: Not Supported 00:07:21.938 PLE Aggregate Log Change Notices: Not Supported 00:07:21.938 LBA Status Info Alert Notices: Not Supported 00:07:21.938 EGE Aggregate Log Change Notices: Not Supported 00:07:21.938 Normal NVM Subsystem Shutdown event: Not Supported 00:07:21.938 Zone Descriptor Change Notices: Not Supported 00:07:21.938 Discovery Log Change Notices: Not Supported 00:07:21.938 Controller Attributes 00:07:21.938 128-bit Host Identifier: Not Supported 00:07:21.938 Non-Operational Permissive Mode: Not Supported 00:07:21.938 NVM Sets: Not Supported 00:07:21.938 Read Recovery Levels: Not Supported 00:07:21.938 Endurance Groups: Not Supported 00:07:21.938 Predictable Latency Mode: Not Supported 00:07:21.938 Traffic Based Keep ALive: Not Supported 00:07:21.938 Namespace Granularity: Not Supported 00:07:21.938 SQ Associations: Not Supported 00:07:21.938 UUID List: Not Supported 00:07:21.938 Multi-Domain Subsystem: Not Supported 00:07:21.938 Fixed Capacity Management: Not Supported 00:07:21.938 Variable Capacity Management: Not Supported 00:07:21.938 Delete Endurance Group: Not Supported 00:07:21.938 Delete NVM Set: Not Supported 00:07:21.938 Extended LBA Formats Supported: Supported 00:07:21.938 Flexible Data Placement Supported: Not Supported 00:07:21.938 00:07:21.938 Controller Memory Buffer Support 00:07:21.938 ================================ 00:07:21.938 Supported: No 00:07:21.938 00:07:21.938 Persistent Memory Region Support 00:07:21.938 
================================ 00:07:21.938 Supported: No 00:07:21.938 00:07:21.938 Admin Command Set Attributes 00:07:21.938 ============================ 00:07:21.938 Security Send/Receive: Not Supported 00:07:21.938 Format NVM: Supported 00:07:21.938 Firmware Activate/Download: Not Supported 00:07:21.938 Namespace Management: Supported 00:07:21.938 Device Self-Test: Not Supported 00:07:21.938 Directives: Supported 00:07:21.938 NVMe-MI: Not Supported 00:07:21.938 Virtualization Management: Not Supported 00:07:21.938 Doorbell Buffer Config: Supported 00:07:21.938 Get LBA Status Capability: Not Supported 00:07:21.938 Command & Feature Lockdown Capability: Not Supported 00:07:21.938 Abort Command Limit: 4 00:07:21.938 Async Event Request Limit: 4 00:07:21.938 Number of Firmware Slots: N/A 00:07:21.938 Firmware Slot 1 Read-Only: N/A 00:07:21.938 Firmware Activation Without Reset: N/A 00:07:21.938 Multiple Update Detection Support: N/A 00:07:21.938 Firmware Update Granularity: No Information Provided 00:07:21.938 Per-Namespace SMART Log: Yes 00:07:21.938 Asymmetric Namespace Access Log Page: Not Supported 00:07:21.938 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:21.938 Command Effects Log Page: Supported 00:07:21.938 Get Log Page Extended Data: Supported 00:07:21.938 Telemetry Log Pages: Not Supported 00:07:21.938 Persistent Event Log Pages: Not Supported 00:07:21.938 Supported Log Pages Log Page: May Support 00:07:21.938 Commands Supported & Effects Log Page: Not Supported 00:07:21.938 Feature Identifiers & Effects Log Page:May Support 00:07:21.938 NVMe-MI Commands & Effects Log Page: May Support 00:07:21.938 Data Area 4 for Telemetry Log: Not Supported 00:07:21.938 Error Log Page Entries Supported: 1 00:07:21.938 Keep Alive: Not Supported 00:07:21.938 00:07:21.938 NVM Command Set Attributes 00:07:21.938 ========================== 00:07:21.938 Submission Queue Entry Size 00:07:21.938 Max: 64 00:07:21.938 Min: 64 00:07:21.938 Completion Queue Entry Size 00:07:21.938 Max: 16 00:07:21.938 Min: 16 00:07:21.938 Number of Namespaces: 256 00:07:21.938 Compare Command: Supported 00:07:21.938 Write Uncorrectable Command: Not Supported 00:07:21.938 Dataset Management Command: Supported 00:07:21.938 Write Zeroes Command: Supported 00:07:21.938 Set Features Save Field: Supported 00:07:21.938 Reservations: Not Supported 00:07:21.938 Timestamp: Supported 00:07:21.938 Copy: Supported 00:07:21.938 Volatile Write Cache: Present 00:07:21.938 Atomic Write Unit (Normal): 1 00:07:21.938 Atomic Write Unit (PFail): 1 00:07:21.938 Atomic Compare & Write Unit: 1 00:07:21.938 Fused Compare & Write: Not Supported 00:07:21.938 Scatter-Gather List 00:07:21.938 SGL Command Set: Supported 00:07:21.938 SGL Keyed: Not Supported 00:07:21.938 SGL Bit Bucket Descriptor: Not Supported 00:07:21.938 SGL Metadata Pointer: Not Supported 00:07:21.938 Oversized SGL: Not Supported 00:07:21.938 SGL Metadata Address: Not Supported 00:07:21.938 SGL Offset: Not Supported 00:07:21.938 Transport SGL Data Block: Not Supported 00:07:21.938 Replay Protected Memory Block: Not Supported 00:07:21.938 00:07:21.938 Firmware Slot Information 00:07:21.938 ========================= 00:07:21.938 Active slot: 1 00:07:21.938 Slot 1 Firmware Revision: 1.0 00:07:21.938 00:07:21.938 00:07:21.938 Commands Supported and Effects 00:07:21.938 ============================== 00:07:21.938 Admin Commands 00:07:21.938 -------------- 00:07:21.938 Delete I/O Submission Queue (00h): Supported 00:07:21.938 Create I/O Submission Queue (01h): Supported 00:07:21.938 
Get Log Page (02h): Supported 00:07:21.938 Delete I/O Completion Queue (04h): Supported 00:07:21.938 Create I/O Completion Queue (05h): Supported 00:07:21.938 Identify (06h): Supported 00:07:21.938 Abort (08h): Supported 00:07:21.938 Set Features (09h): Supported 00:07:21.938 Get Features (0Ah): Supported 00:07:21.938 Asynchronous Event Request (0Ch): Supported 00:07:21.938 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:21.938 Directive Send (19h): Supported 00:07:21.938 Directive Receive (1Ah): Supported 00:07:21.938 Virtualization Management (1Ch): Supported 00:07:21.938 Doorbell Buffer Config (7Ch): Supported 00:07:21.938 Format NVM (80h): Supported LBA-Change 00:07:21.938 I/O Commands 00:07:21.938 ------------ 00:07:21.938 Flush (00h): Supported LBA-Change 00:07:21.938 Write (01h): Supported LBA-Change 00:07:21.938 Read (02h): Supported 00:07:21.938 Compare (05h): Supported 00:07:21.938 Write Zeroes (08h): Supported LBA-Change 00:07:21.938 Dataset Management (09h): Supported LBA-Change 00:07:21.938 Unknown (0Ch): Supported 00:07:21.938 Unknown (12h): Supported 00:07:21.938 Copy (19h): Supported LBA-Change 00:07:21.938 Unknown (1Dh): Supported LBA-Change 00:07:21.938 00:07:21.938 Error Log 00:07:21.938 ========= 00:07:21.938 00:07:21.938 Arbitration 00:07:21.938 =========== 00:07:21.938 Arbitration Burst: no limit 00:07:21.938 00:07:21.938 Power Management 00:07:21.938 ================ 00:07:21.938 Number of Power States: 1 00:07:21.938 Current Power State: Power State #0 00:07:21.938 Power State #0: 00:07:21.938 Max Power: 25.00 W 00:07:21.938 Non-Operational State: Operational 00:07:21.938 Entry Latency: 16 microseconds 00:07:21.938 Exit Latency: 4 microseconds 00:07:21.938 Relative Read Throughput: 0 00:07:21.938 Relative Read Latency: 0 00:07:21.938 Relative Write Throughput: 0 00:07:21.938 Relative Write Latency: 0 00:07:21.938 Idle Power: Not Reported 00:07:21.938 Active Power: Not Reported 00:07:21.938 Non-Operational Permissive Mode: Not Supported 00:07:21.938 00:07:21.938 Health Information 00:07:21.938 ================== 00:07:21.938 Critical Warnings: 00:07:21.938 Available Spare Space: OK 00:07:21.938 Temperature: OK 00:07:21.938 Device Reliability: OK 00:07:21.938 Read Only: No 00:07:21.938 Volatile Memory Backup: OK 00:07:21.938 Current Temperature: 323 Kelvin (50 Celsius) 00:07:21.938 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:21.938 Available Spare: 0% 00:07:21.938 Available Spare Threshold: 0% 00:07:21.938 Life Percentage Used: 0% 00:07:21.939 Data Units Read: 674 00:07:21.939 Data Units Written: 602 00:07:21.939 Host Read Commands: 38464 00:07:21.939 Host Write Commands: 38250 00:07:21.939 Controller Busy Time: 0 minutes 00:07:21.939 Power Cycles: 0 00:07:21.939 Power On Hours: 0 hours 00:07:21.939 Unsafe Shutdowns: 0 00:07:21.939 Unrecoverable Media Errors: 0 00:07:21.939 Lifetime Error Log Entries: 0 00:07:21.939 Warning Temperature Time: 0 minutes 00:07:21.939 Critical Temperature Time: 0 minutes 00:07:21.939 00:07:21.939 Number of Queues 00:07:21.939 ================ 00:07:21.939 Number of I/O Submission Queues: 64 00:07:21.939 Number of I/O Completion Queues: 64 00:07:21.939 00:07:21.939 ZNS Specific Controller Data 00:07:21.939 ============================ 00:07:21.939 Zone Append Size Limit: 0 00:07:21.939 00:07:21.939 00:07:21.939 Active Namespaces 00:07:21.939 ================= 00:07:21.939 Namespace ID:1 00:07:21.939 Error Recovery Timeout: Unlimited 00:07:21.939 Command Set Identifier: NVM (00h) 00:07:21.939 Deallocate: Supported 
00:07:21.939 Deallocated/Unwritten Error: Supported 00:07:21.939 Deallocated Read Value: All 0x00 00:07:21.939 Deallocate in Write Zeroes: Not Supported 00:07:21.939 Deallocated Guard Field: 0xFFFF 00:07:21.939 Flush: Supported 00:07:21.939 Reservation: Not Supported 00:07:21.939 Metadata Transferred as: Separate Metadata Buffer 00:07:21.939 Namespace Sharing Capabilities: Private 00:07:21.939 Size (in LBAs): 1548666 (5GiB) 00:07:21.939 Capacity (in LBAs): 1548666 (5GiB) 00:07:21.939 Utilization (in LBAs): 1548666 (5GiB) 00:07:21.939 Thin Provisioning: Not Supported 00:07:21.939 Per-NS Atomic Units: No 00:07:21.939 Maximum Single Source Range Length: 128 00:07:21.939 Maximum Copy Length: 128 00:07:21.939 Maximum Source Range Count: 128 00:07:21.939 NGUID/EUI64 Never Reused: No 00:07:21.939 Namespace Write Protected: No 00:07:21.939 Number of LBA Formats: 8 00:07:21.939 Current LBA Format: LBA Format #07 00:07:21.939 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:21.939 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:21.939 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:21.939 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:21.939 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:21.939 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:21.939 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:21.939 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:21.939 00:07:21.939 NVM Specific Namespace Data 00:07:21.939 =========================== 00:07:21.939 Logical Block Storage Tag Mask: 0 00:07:21.939 Protection Information Capabilities: 00:07:21.939 16b Guard Protection Information Storage Tag Support: No 00:07:21.939 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:21.939 Storage Tag Check Read Support: No 00:07:21.939 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.939 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.939 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.939 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.939 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.939 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.939 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.939 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:21.939 22:48:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:21.939 22:48:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:22.199 ===================================================== 00:07:22.199 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:22.199 ===================================================== 00:07:22.199 Controller Capabilities/Features 00:07:22.199 ================================ 00:07:22.199 Vendor ID: 1b36 00:07:22.199 Subsystem Vendor ID: 1af4 00:07:22.199 Serial Number: 12341 00:07:22.199 Model Number: QEMU NVMe Ctrl 00:07:22.199 Firmware Version: 8.0.0 00:07:22.199 Recommended Arb Burst: 6 00:07:22.199 IEEE OUI Identifier: 00 54 52 00:07:22.199 Multi-path I/O 00:07:22.199 May have multiple subsystem ports: No 00:07:22.199 May have multiple 
controllers: No 00:07:22.199 Associated with SR-IOV VF: No 00:07:22.199 Max Data Transfer Size: 524288 00:07:22.199 Max Number of Namespaces: 256 00:07:22.199 Max Number of I/O Queues: 64 00:07:22.199 NVMe Specification Version (VS): 1.4 00:07:22.199 NVMe Specification Version (Identify): 1.4 00:07:22.199 Maximum Queue Entries: 2048 00:07:22.199 Contiguous Queues Required: Yes 00:07:22.199 Arbitration Mechanisms Supported 00:07:22.199 Weighted Round Robin: Not Supported 00:07:22.199 Vendor Specific: Not Supported 00:07:22.199 Reset Timeout: 7500 ms 00:07:22.199 Doorbell Stride: 4 bytes 00:07:22.199 NVM Subsystem Reset: Not Supported 00:07:22.199 Command Sets Supported 00:07:22.199 NVM Command Set: Supported 00:07:22.199 Boot Partition: Not Supported 00:07:22.200 Memory Page Size Minimum: 4096 bytes 00:07:22.200 Memory Page Size Maximum: 65536 bytes 00:07:22.200 Persistent Memory Region: Not Supported 00:07:22.200 Optional Asynchronous Events Supported 00:07:22.200 Namespace Attribute Notices: Supported 00:07:22.200 Firmware Activation Notices: Not Supported 00:07:22.200 ANA Change Notices: Not Supported 00:07:22.200 PLE Aggregate Log Change Notices: Not Supported 00:07:22.200 LBA Status Info Alert Notices: Not Supported 00:07:22.200 EGE Aggregate Log Change Notices: Not Supported 00:07:22.200 Normal NVM Subsystem Shutdown event: Not Supported 00:07:22.200 Zone Descriptor Change Notices: Not Supported 00:07:22.200 Discovery Log Change Notices: Not Supported 00:07:22.200 Controller Attributes 00:07:22.200 128-bit Host Identifier: Not Supported 00:07:22.200 Non-Operational Permissive Mode: Not Supported 00:07:22.200 NVM Sets: Not Supported 00:07:22.200 Read Recovery Levels: Not Supported 00:07:22.200 Endurance Groups: Not Supported 00:07:22.200 Predictable Latency Mode: Not Supported 00:07:22.200 Traffic Based Keep ALive: Not Supported 00:07:22.200 Namespace Granularity: Not Supported 00:07:22.200 SQ Associations: Not Supported 00:07:22.200 UUID List: Not Supported 00:07:22.200 Multi-Domain Subsystem: Not Supported 00:07:22.200 Fixed Capacity Management: Not Supported 00:07:22.200 Variable Capacity Management: Not Supported 00:07:22.200 Delete Endurance Group: Not Supported 00:07:22.200 Delete NVM Set: Not Supported 00:07:22.200 Extended LBA Formats Supported: Supported 00:07:22.200 Flexible Data Placement Supported: Not Supported 00:07:22.200 00:07:22.200 Controller Memory Buffer Support 00:07:22.200 ================================ 00:07:22.200 Supported: No 00:07:22.200 00:07:22.200 Persistent Memory Region Support 00:07:22.200 ================================ 00:07:22.200 Supported: No 00:07:22.200 00:07:22.200 Admin Command Set Attributes 00:07:22.200 ============================ 00:07:22.200 Security Send/Receive: Not Supported 00:07:22.200 Format NVM: Supported 00:07:22.200 Firmware Activate/Download: Not Supported 00:07:22.200 Namespace Management: Supported 00:07:22.200 Device Self-Test: Not Supported 00:07:22.200 Directives: Supported 00:07:22.200 NVMe-MI: Not Supported 00:07:22.200 Virtualization Management: Not Supported 00:07:22.200 Doorbell Buffer Config: Supported 00:07:22.200 Get LBA Status Capability: Not Supported 00:07:22.200 Command & Feature Lockdown Capability: Not Supported 00:07:22.200 Abort Command Limit: 4 00:07:22.200 Async Event Request Limit: 4 00:07:22.200 Number of Firmware Slots: N/A 00:07:22.200 Firmware Slot 1 Read-Only: N/A 00:07:22.200 Firmware Activation Without Reset: N/A 00:07:22.200 Multiple Update Detection Support: N/A 00:07:22.200 Firmware Update 
Granularity: No Information Provided 00:07:22.200 Per-Namespace SMART Log: Yes 00:07:22.200 Asymmetric Namespace Access Log Page: Not Supported 00:07:22.200 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:22.200 Command Effects Log Page: Supported 00:07:22.200 Get Log Page Extended Data: Supported 00:07:22.200 Telemetry Log Pages: Not Supported 00:07:22.200 Persistent Event Log Pages: Not Supported 00:07:22.200 Supported Log Pages Log Page: May Support 00:07:22.200 Commands Supported & Effects Log Page: Not Supported 00:07:22.200 Feature Identifiers & Effects Log Page:May Support 00:07:22.200 NVMe-MI Commands & Effects Log Page: May Support 00:07:22.200 Data Area 4 for Telemetry Log: Not Supported 00:07:22.200 Error Log Page Entries Supported: 1 00:07:22.200 Keep Alive: Not Supported 00:07:22.200 00:07:22.200 NVM Command Set Attributes 00:07:22.200 ========================== 00:07:22.200 Submission Queue Entry Size 00:07:22.200 Max: 64 00:07:22.200 Min: 64 00:07:22.200 Completion Queue Entry Size 00:07:22.200 Max: 16 00:07:22.200 Min: 16 00:07:22.200 Number of Namespaces: 256 00:07:22.200 Compare Command: Supported 00:07:22.200 Write Uncorrectable Command: Not Supported 00:07:22.200 Dataset Management Command: Supported 00:07:22.200 Write Zeroes Command: Supported 00:07:22.200 Set Features Save Field: Supported 00:07:22.200 Reservations: Not Supported 00:07:22.200 Timestamp: Supported 00:07:22.200 Copy: Supported 00:07:22.200 Volatile Write Cache: Present 00:07:22.200 Atomic Write Unit (Normal): 1 00:07:22.200 Atomic Write Unit (PFail): 1 00:07:22.200 Atomic Compare & Write Unit: 1 00:07:22.200 Fused Compare & Write: Not Supported 00:07:22.200 Scatter-Gather List 00:07:22.200 SGL Command Set: Supported 00:07:22.200 SGL Keyed: Not Supported 00:07:22.200 SGL Bit Bucket Descriptor: Not Supported 00:07:22.200 SGL Metadata Pointer: Not Supported 00:07:22.200 Oversized SGL: Not Supported 00:07:22.200 SGL Metadata Address: Not Supported 00:07:22.200 SGL Offset: Not Supported 00:07:22.200 Transport SGL Data Block: Not Supported 00:07:22.200 Replay Protected Memory Block: Not Supported 00:07:22.200 00:07:22.200 Firmware Slot Information 00:07:22.200 ========================= 00:07:22.200 Active slot: 1 00:07:22.200 Slot 1 Firmware Revision: 1.0 00:07:22.200 00:07:22.200 00:07:22.200 Commands Supported and Effects 00:07:22.200 ============================== 00:07:22.200 Admin Commands 00:07:22.200 -------------- 00:07:22.200 Delete I/O Submission Queue (00h): Supported 00:07:22.200 Create I/O Submission Queue (01h): Supported 00:07:22.200 Get Log Page (02h): Supported 00:07:22.200 Delete I/O Completion Queue (04h): Supported 00:07:22.200 Create I/O Completion Queue (05h): Supported 00:07:22.200 Identify (06h): Supported 00:07:22.200 Abort (08h): Supported 00:07:22.200 Set Features (09h): Supported 00:07:22.200 Get Features (0Ah): Supported 00:07:22.200 Asynchronous Event Request (0Ch): Supported 00:07:22.200 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:22.200 Directive Send (19h): Supported 00:07:22.200 Directive Receive (1Ah): Supported 00:07:22.200 Virtualization Management (1Ch): Supported 00:07:22.200 Doorbell Buffer Config (7Ch): Supported 00:07:22.200 Format NVM (80h): Supported LBA-Change 00:07:22.200 I/O Commands 00:07:22.200 ------------ 00:07:22.200 Flush (00h): Supported LBA-Change 00:07:22.200 Write (01h): Supported LBA-Change 00:07:22.200 Read (02h): Supported 00:07:22.200 Compare (05h): Supported 00:07:22.200 Write Zeroes (08h): Supported LBA-Change 00:07:22.200 
Dataset Management (09h): Supported LBA-Change 00:07:22.200 Unknown (0Ch): Supported 00:07:22.200 Unknown (12h): Supported 00:07:22.200 Copy (19h): Supported LBA-Change 00:07:22.200 Unknown (1Dh): Supported LBA-Change 00:07:22.200 00:07:22.200 Error Log 00:07:22.200 ========= 00:07:22.200 00:07:22.200 Arbitration 00:07:22.200 =========== 00:07:22.200 Arbitration Burst: no limit 00:07:22.200 00:07:22.200 Power Management 00:07:22.200 ================ 00:07:22.200 Number of Power States: 1 00:07:22.200 Current Power State: Power State #0 00:07:22.200 Power State #0: 00:07:22.200 Max Power: 25.00 W 00:07:22.200 Non-Operational State: Operational 00:07:22.200 Entry Latency: 16 microseconds 00:07:22.200 Exit Latency: 4 microseconds 00:07:22.200 Relative Read Throughput: 0 00:07:22.200 Relative Read Latency: 0 00:07:22.200 Relative Write Throughput: 0 00:07:22.200 Relative Write Latency: 0 00:07:22.200 Idle Power: Not Reported 00:07:22.200 Active Power: Not Reported 00:07:22.200 Non-Operational Permissive Mode: Not Supported 00:07:22.200 00:07:22.200 Health Information 00:07:22.200 ================== 00:07:22.200 Critical Warnings: 00:07:22.200 Available Spare Space: OK 00:07:22.200 Temperature: OK 00:07:22.200 Device Reliability: OK 00:07:22.200 Read Only: No 00:07:22.200 Volatile Memory Backup: OK 00:07:22.200 Current Temperature: 323 Kelvin (50 Celsius) 00:07:22.200 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:22.200 Available Spare: 0% 00:07:22.200 Available Spare Threshold: 0% 00:07:22.200 Life Percentage Used: 0% 00:07:22.200 Data Units Read: 1030 00:07:22.200 Data Units Written: 907 00:07:22.200 Host Read Commands: 56990 00:07:22.200 Host Write Commands: 55934 00:07:22.200 Controller Busy Time: 0 minutes 00:07:22.200 Power Cycles: 0 00:07:22.200 Power On Hours: 0 hours 00:07:22.200 Unsafe Shutdowns: 0 00:07:22.200 Unrecoverable Media Errors: 0 00:07:22.200 Lifetime Error Log Entries: 0 00:07:22.200 Warning Temperature Time: 0 minutes 00:07:22.200 Critical Temperature Time: 0 minutes 00:07:22.200 00:07:22.200 Number of Queues 00:07:22.200 ================ 00:07:22.200 Number of I/O Submission Queues: 64 00:07:22.200 Number of I/O Completion Queues: 64 00:07:22.200 00:07:22.200 ZNS Specific Controller Data 00:07:22.200 ============================ 00:07:22.200 Zone Append Size Limit: 0 00:07:22.200 00:07:22.200 00:07:22.200 Active Namespaces 00:07:22.200 ================= 00:07:22.200 Namespace ID:1 00:07:22.200 Error Recovery Timeout: Unlimited 00:07:22.200 Command Set Identifier: NVM (00h) 00:07:22.200 Deallocate: Supported 00:07:22.200 Deallocated/Unwritten Error: Supported 00:07:22.200 Deallocated Read Value: All 0x00 00:07:22.201 Deallocate in Write Zeroes: Not Supported 00:07:22.201 Deallocated Guard Field: 0xFFFF 00:07:22.201 Flush: Supported 00:07:22.201 Reservation: Not Supported 00:07:22.201 Namespace Sharing Capabilities: Private 00:07:22.201 Size (in LBAs): 1310720 (5GiB) 00:07:22.201 Capacity (in LBAs): 1310720 (5GiB) 00:07:22.201 Utilization (in LBAs): 1310720 (5GiB) 00:07:22.201 Thin Provisioning: Not Supported 00:07:22.201 Per-NS Atomic Units: No 00:07:22.201 Maximum Single Source Range Length: 128 00:07:22.201 Maximum Copy Length: 128 00:07:22.201 Maximum Source Range Count: 128 00:07:22.201 NGUID/EUI64 Never Reused: No 00:07:22.201 Namespace Write Protected: No 00:07:22.201 Number of LBA Formats: 8 00:07:22.201 Current LBA Format: LBA Format #04 00:07:22.201 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.201 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:22.201 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.201 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.201 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.201 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.201 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.201 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.201 00:07:22.201 NVM Specific Namespace Data 00:07:22.201 =========================== 00:07:22.201 Logical Block Storage Tag Mask: 0 00:07:22.201 Protection Information Capabilities: 00:07:22.201 16b Guard Protection Information Storage Tag Support: No 00:07:22.201 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.201 Storage Tag Check Read Support: No 00:07:22.201 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.201 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.201 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.201 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.201 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.201 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.201 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.201 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.201 22:48:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:22.201 22:48:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:22.461 ===================================================== 00:07:22.461 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:22.461 ===================================================== 00:07:22.461 Controller Capabilities/Features 00:07:22.461 ================================ 00:07:22.461 Vendor ID: 1b36 00:07:22.461 Subsystem Vendor ID: 1af4 00:07:22.461 Serial Number: 12342 00:07:22.461 Model Number: QEMU NVMe Ctrl 00:07:22.461 Firmware Version: 8.0.0 00:07:22.461 Recommended Arb Burst: 6 00:07:22.461 IEEE OUI Identifier: 00 54 52 00:07:22.461 Multi-path I/O 00:07:22.461 May have multiple subsystem ports: No 00:07:22.461 May have multiple controllers: No 00:07:22.461 Associated with SR-IOV VF: No 00:07:22.461 Max Data Transfer Size: 524288 00:07:22.461 Max Number of Namespaces: 256 00:07:22.461 Max Number of I/O Queues: 64 00:07:22.461 NVMe Specification Version (VS): 1.4 00:07:22.461 NVMe Specification Version (Identify): 1.4 00:07:22.461 Maximum Queue Entries: 2048 00:07:22.461 Contiguous Queues Required: Yes 00:07:22.461 Arbitration Mechanisms Supported 00:07:22.461 Weighted Round Robin: Not Supported 00:07:22.461 Vendor Specific: Not Supported 00:07:22.461 Reset Timeout: 7500 ms 00:07:22.461 Doorbell Stride: 4 bytes 00:07:22.461 NVM Subsystem Reset: Not Supported 00:07:22.461 Command Sets Supported 00:07:22.461 NVM Command Set: Supported 00:07:22.461 Boot Partition: Not Supported 00:07:22.461 Memory Page Size Minimum: 4096 bytes 00:07:22.461 Memory Page Size Maximum: 65536 bytes 00:07:22.461 Persistent Memory Region: Not Supported 00:07:22.461 Optional Asynchronous Events Supported 00:07:22.461 Namespace Attribute Notices: Supported 00:07:22.461 Firmware 
Activation Notices: Not Supported 00:07:22.461 ANA Change Notices: Not Supported 00:07:22.461 PLE Aggregate Log Change Notices: Not Supported 00:07:22.461 LBA Status Info Alert Notices: Not Supported 00:07:22.461 EGE Aggregate Log Change Notices: Not Supported 00:07:22.461 Normal NVM Subsystem Shutdown event: Not Supported 00:07:22.461 Zone Descriptor Change Notices: Not Supported 00:07:22.461 Discovery Log Change Notices: Not Supported 00:07:22.461 Controller Attributes 00:07:22.461 128-bit Host Identifier: Not Supported 00:07:22.461 Non-Operational Permissive Mode: Not Supported 00:07:22.461 NVM Sets: Not Supported 00:07:22.461 Read Recovery Levels: Not Supported 00:07:22.461 Endurance Groups: Not Supported 00:07:22.461 Predictable Latency Mode: Not Supported 00:07:22.461 Traffic Based Keep ALive: Not Supported 00:07:22.461 Namespace Granularity: Not Supported 00:07:22.461 SQ Associations: Not Supported 00:07:22.461 UUID List: Not Supported 00:07:22.461 Multi-Domain Subsystem: Not Supported 00:07:22.461 Fixed Capacity Management: Not Supported 00:07:22.461 Variable Capacity Management: Not Supported 00:07:22.461 Delete Endurance Group: Not Supported 00:07:22.461 Delete NVM Set: Not Supported 00:07:22.461 Extended LBA Formats Supported: Supported 00:07:22.461 Flexible Data Placement Supported: Not Supported 00:07:22.461 00:07:22.461 Controller Memory Buffer Support 00:07:22.461 ================================ 00:07:22.461 Supported: No 00:07:22.461 00:07:22.461 Persistent Memory Region Support 00:07:22.461 ================================ 00:07:22.461 Supported: No 00:07:22.461 00:07:22.461 Admin Command Set Attributes 00:07:22.461 ============================ 00:07:22.461 Security Send/Receive: Not Supported 00:07:22.461 Format NVM: Supported 00:07:22.461 Firmware Activate/Download: Not Supported 00:07:22.461 Namespace Management: Supported 00:07:22.461 Device Self-Test: Not Supported 00:07:22.461 Directives: Supported 00:07:22.461 NVMe-MI: Not Supported 00:07:22.461 Virtualization Management: Not Supported 00:07:22.461 Doorbell Buffer Config: Supported 00:07:22.461 Get LBA Status Capability: Not Supported 00:07:22.461 Command & Feature Lockdown Capability: Not Supported 00:07:22.461 Abort Command Limit: 4 00:07:22.461 Async Event Request Limit: 4 00:07:22.461 Number of Firmware Slots: N/A 00:07:22.461 Firmware Slot 1 Read-Only: N/A 00:07:22.461 Firmware Activation Without Reset: N/A 00:07:22.461 Multiple Update Detection Support: N/A 00:07:22.461 Firmware Update Granularity: No Information Provided 00:07:22.461 Per-Namespace SMART Log: Yes 00:07:22.461 Asymmetric Namespace Access Log Page: Not Supported 00:07:22.461 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:22.461 Command Effects Log Page: Supported 00:07:22.461 Get Log Page Extended Data: Supported 00:07:22.461 Telemetry Log Pages: Not Supported 00:07:22.461 Persistent Event Log Pages: Not Supported 00:07:22.461 Supported Log Pages Log Page: May Support 00:07:22.461 Commands Supported & Effects Log Page: Not Supported 00:07:22.461 Feature Identifiers & Effects Log Page:May Support 00:07:22.461 NVMe-MI Commands & Effects Log Page: May Support 00:07:22.461 Data Area 4 for Telemetry Log: Not Supported 00:07:22.461 Error Log Page Entries Supported: 1 00:07:22.461 Keep Alive: Not Supported 00:07:22.461 00:07:22.461 NVM Command Set Attributes 00:07:22.461 ========================== 00:07:22.461 Submission Queue Entry Size 00:07:22.461 Max: 64 00:07:22.461 Min: 64 00:07:22.461 Completion Queue Entry Size 00:07:22.461 Max: 16 
00:07:22.461 Min: 16 00:07:22.461 Number of Namespaces: 256 00:07:22.461 Compare Command: Supported 00:07:22.461 Write Uncorrectable Command: Not Supported 00:07:22.461 Dataset Management Command: Supported 00:07:22.461 Write Zeroes Command: Supported 00:07:22.461 Set Features Save Field: Supported 00:07:22.461 Reservations: Not Supported 00:07:22.461 Timestamp: Supported 00:07:22.461 Copy: Supported 00:07:22.461 Volatile Write Cache: Present 00:07:22.461 Atomic Write Unit (Normal): 1 00:07:22.461 Atomic Write Unit (PFail): 1 00:07:22.461 Atomic Compare & Write Unit: 1 00:07:22.461 Fused Compare & Write: Not Supported 00:07:22.461 Scatter-Gather List 00:07:22.461 SGL Command Set: Supported 00:07:22.461 SGL Keyed: Not Supported 00:07:22.461 SGL Bit Bucket Descriptor: Not Supported 00:07:22.461 SGL Metadata Pointer: Not Supported 00:07:22.461 Oversized SGL: Not Supported 00:07:22.461 SGL Metadata Address: Not Supported 00:07:22.461 SGL Offset: Not Supported 00:07:22.461 Transport SGL Data Block: Not Supported 00:07:22.461 Replay Protected Memory Block: Not Supported 00:07:22.461 00:07:22.461 Firmware Slot Information 00:07:22.461 ========================= 00:07:22.461 Active slot: 1 00:07:22.461 Slot 1 Firmware Revision: 1.0 00:07:22.461 00:07:22.461 00:07:22.461 Commands Supported and Effects 00:07:22.461 ============================== 00:07:22.461 Admin Commands 00:07:22.461 -------------- 00:07:22.461 Delete I/O Submission Queue (00h): Supported 00:07:22.461 Create I/O Submission Queue (01h): Supported 00:07:22.461 Get Log Page (02h): Supported 00:07:22.461 Delete I/O Completion Queue (04h): Supported 00:07:22.461 Create I/O Completion Queue (05h): Supported 00:07:22.461 Identify (06h): Supported 00:07:22.461 Abort (08h): Supported 00:07:22.461 Set Features (09h): Supported 00:07:22.461 Get Features (0Ah): Supported 00:07:22.461 Asynchronous Event Request (0Ch): Supported 00:07:22.461 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:22.461 Directive Send (19h): Supported 00:07:22.461 Directive Receive (1Ah): Supported 00:07:22.461 Virtualization Management (1Ch): Supported 00:07:22.461 Doorbell Buffer Config (7Ch): Supported 00:07:22.461 Format NVM (80h): Supported LBA-Change 00:07:22.461 I/O Commands 00:07:22.461 ------------ 00:07:22.461 Flush (00h): Supported LBA-Change 00:07:22.461 Write (01h): Supported LBA-Change 00:07:22.461 Read (02h): Supported 00:07:22.461 Compare (05h): Supported 00:07:22.461 Write Zeroes (08h): Supported LBA-Change 00:07:22.461 Dataset Management (09h): Supported LBA-Change 00:07:22.461 Unknown (0Ch): Supported 00:07:22.461 Unknown (12h): Supported 00:07:22.461 Copy (19h): Supported LBA-Change 00:07:22.461 Unknown (1Dh): Supported LBA-Change 00:07:22.461 00:07:22.461 Error Log 00:07:22.461 ========= 00:07:22.461 00:07:22.461 Arbitration 00:07:22.461 =========== 00:07:22.461 Arbitration Burst: no limit 00:07:22.461 00:07:22.461 Power Management 00:07:22.461 ================ 00:07:22.461 Number of Power States: 1 00:07:22.461 Current Power State: Power State #0 00:07:22.461 Power State #0: 00:07:22.461 Max Power: 25.00 W 00:07:22.461 Non-Operational State: Operational 00:07:22.461 Entry Latency: 16 microseconds 00:07:22.461 Exit Latency: 4 microseconds 00:07:22.461 Relative Read Throughput: 0 00:07:22.461 Relative Read Latency: 0 00:07:22.461 Relative Write Throughput: 0 00:07:22.461 Relative Write Latency: 0 00:07:22.461 Idle Power: Not Reported 00:07:22.461 Active Power: Not Reported 00:07:22.462 Non-Operational Permissive Mode: Not Supported 
00:07:22.462 00:07:22.462 Health Information 00:07:22.462 ================== 00:07:22.462 Critical Warnings: 00:07:22.462 Available Spare Space: OK 00:07:22.462 Temperature: OK 00:07:22.462 Device Reliability: OK 00:07:22.462 Read Only: No 00:07:22.462 Volatile Memory Backup: OK 00:07:22.462 Current Temperature: 323 Kelvin (50 Celsius) 00:07:22.462 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:22.462 Available Spare: 0% 00:07:22.462 Available Spare Threshold: 0% 00:07:22.462 Life Percentage Used: 0% 00:07:22.462 Data Units Read: 2248 00:07:22.462 Data Units Written: 2035 00:07:22.462 Host Read Commands: 118183 00:07:22.462 Host Write Commands: 116452 00:07:22.462 Controller Busy Time: 0 minutes 00:07:22.462 Power Cycles: 0 00:07:22.462 Power On Hours: 0 hours 00:07:22.462 Unsafe Shutdowns: 0 00:07:22.462 Unrecoverable Media Errors: 0 00:07:22.462 Lifetime Error Log Entries: 0 00:07:22.462 Warning Temperature Time: 0 minutes 00:07:22.462 Critical Temperature Time: 0 minutes 00:07:22.462 00:07:22.462 Number of Queues 00:07:22.462 ================ 00:07:22.462 Number of I/O Submission Queues: 64 00:07:22.462 Number of I/O Completion Queues: 64 00:07:22.462 00:07:22.462 ZNS Specific Controller Data 00:07:22.462 ============================ 00:07:22.462 Zone Append Size Limit: 0 00:07:22.462 00:07:22.462 00:07:22.462 Active Namespaces 00:07:22.462 ================= 00:07:22.462 Namespace ID:1 00:07:22.462 Error Recovery Timeout: Unlimited 00:07:22.462 Command Set Identifier: NVM (00h) 00:07:22.462 Deallocate: Supported 00:07:22.462 Deallocated/Unwritten Error: Supported 00:07:22.462 Deallocated Read Value: All 0x00 00:07:22.462 Deallocate in Write Zeroes: Not Supported 00:07:22.462 Deallocated Guard Field: 0xFFFF 00:07:22.462 Flush: Supported 00:07:22.462 Reservation: Not Supported 00:07:22.462 Namespace Sharing Capabilities: Private 00:07:22.462 Size (in LBAs): 1048576 (4GiB) 00:07:22.462 Capacity (in LBAs): 1048576 (4GiB) 00:07:22.462 Utilization (in LBAs): 1048576 (4GiB) 00:07:22.462 Thin Provisioning: Not Supported 00:07:22.462 Per-NS Atomic Units: No 00:07:22.462 Maximum Single Source Range Length: 128 00:07:22.462 Maximum Copy Length: 128 00:07:22.462 Maximum Source Range Count: 128 00:07:22.462 NGUID/EUI64 Never Reused: No 00:07:22.462 Namespace Write Protected: No 00:07:22.462 Number of LBA Formats: 8 00:07:22.462 Current LBA Format: LBA Format #04 00:07:22.462 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.462 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.462 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.462 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.462 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.462 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.462 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.462 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.462 00:07:22.462 NVM Specific Namespace Data 00:07:22.462 =========================== 00:07:22.462 Logical Block Storage Tag Mask: 0 00:07:22.462 Protection Information Capabilities: 00:07:22.462 16b Guard Protection Information Storage Tag Support: No 00:07:22.462 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.462 Storage Tag Check Read Support: No 00:07:22.462 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Namespace ID:2 00:07:22.462 Error Recovery Timeout: Unlimited 00:07:22.462 Command Set Identifier: NVM (00h) 00:07:22.462 Deallocate: Supported 00:07:22.462 Deallocated/Unwritten Error: Supported 00:07:22.462 Deallocated Read Value: All 0x00 00:07:22.462 Deallocate in Write Zeroes: Not Supported 00:07:22.462 Deallocated Guard Field: 0xFFFF 00:07:22.462 Flush: Supported 00:07:22.462 Reservation: Not Supported 00:07:22.462 Namespace Sharing Capabilities: Private 00:07:22.462 Size (in LBAs): 1048576 (4GiB) 00:07:22.462 Capacity (in LBAs): 1048576 (4GiB) 00:07:22.462 Utilization (in LBAs): 1048576 (4GiB) 00:07:22.462 Thin Provisioning: Not Supported 00:07:22.462 Per-NS Atomic Units: No 00:07:22.462 Maximum Single Source Range Length: 128 00:07:22.462 Maximum Copy Length: 128 00:07:22.462 Maximum Source Range Count: 128 00:07:22.462 NGUID/EUI64 Never Reused: No 00:07:22.462 Namespace Write Protected: No 00:07:22.462 Number of LBA Formats: 8 00:07:22.462 Current LBA Format: LBA Format #04 00:07:22.462 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.462 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.462 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.462 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.462 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.462 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.462 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.462 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.462 00:07:22.462 NVM Specific Namespace Data 00:07:22.462 =========================== 00:07:22.462 Logical Block Storage Tag Mask: 0 00:07:22.462 Protection Information Capabilities: 00:07:22.462 16b Guard Protection Information Storage Tag Support: No 00:07:22.462 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.462 Storage Tag Check Read Support: No 00:07:22.462 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Namespace ID:3 00:07:22.462 Error Recovery Timeout: Unlimited 00:07:22.462 Command Set Identifier: NVM (00h) 00:07:22.462 Deallocate: Supported 00:07:22.462 Deallocated/Unwritten Error: Supported 00:07:22.462 Deallocated Read 
Value: All 0x00 00:07:22.462 Deallocate in Write Zeroes: Not Supported 00:07:22.462 Deallocated Guard Field: 0xFFFF 00:07:22.462 Flush: Supported 00:07:22.462 Reservation: Not Supported 00:07:22.462 Namespace Sharing Capabilities: Private 00:07:22.462 Size (in LBAs): 1048576 (4GiB) 00:07:22.462 Capacity (in LBAs): 1048576 (4GiB) 00:07:22.462 Utilization (in LBAs): 1048576 (4GiB) 00:07:22.462 Thin Provisioning: Not Supported 00:07:22.462 Per-NS Atomic Units: No 00:07:22.462 Maximum Single Source Range Length: 128 00:07:22.462 Maximum Copy Length: 128 00:07:22.462 Maximum Source Range Count: 128 00:07:22.462 NGUID/EUI64 Never Reused: No 00:07:22.462 Namespace Write Protected: No 00:07:22.462 Number of LBA Formats: 8 00:07:22.462 Current LBA Format: LBA Format #04 00:07:22.462 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.462 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.462 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.462 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:22.462 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.462 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.462 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.462 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.462 00:07:22.462 NVM Specific Namespace Data 00:07:22.462 =========================== 00:07:22.462 Logical Block Storage Tag Mask: 0 00:07:22.462 Protection Information Capabilities: 00:07:22.462 16b Guard Protection Information Storage Tag Support: No 00:07:22.462 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.462 Storage Tag Check Read Support: No 00:07:22.462 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.462 22:48:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:22.462 22:48:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:22.721 ===================================================== 00:07:22.721 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:22.721 ===================================================== 00:07:22.721 Controller Capabilities/Features 00:07:22.721 ================================ 00:07:22.721 Vendor ID: 1b36 00:07:22.721 Subsystem Vendor ID: 1af4 00:07:22.721 Serial Number: 12343 00:07:22.721 Model Number: QEMU NVMe Ctrl 00:07:22.721 Firmware Version: 8.0.0 00:07:22.721 Recommended Arb Burst: 6 00:07:22.721 IEEE OUI Identifier: 00 54 52 00:07:22.721 Multi-path I/O 00:07:22.721 May have multiple subsystem ports: No 00:07:22.721 May have multiple controllers: Yes 00:07:22.721 Associated with SR-IOV VF: No 00:07:22.721 Max Data Transfer Size: 524288 00:07:22.721 Max Number of Namespaces: 
256 00:07:22.721 Max Number of I/O Queues: 64 00:07:22.721 NVMe Specification Version (VS): 1.4 00:07:22.721 NVMe Specification Version (Identify): 1.4 00:07:22.721 Maximum Queue Entries: 2048 00:07:22.721 Contiguous Queues Required: Yes 00:07:22.721 Arbitration Mechanisms Supported 00:07:22.721 Weighted Round Robin: Not Supported 00:07:22.721 Vendor Specific: Not Supported 00:07:22.721 Reset Timeout: 7500 ms 00:07:22.721 Doorbell Stride: 4 bytes 00:07:22.721 NVM Subsystem Reset: Not Supported 00:07:22.721 Command Sets Supported 00:07:22.721 NVM Command Set: Supported 00:07:22.721 Boot Partition: Not Supported 00:07:22.721 Memory Page Size Minimum: 4096 bytes 00:07:22.721 Memory Page Size Maximum: 65536 bytes 00:07:22.721 Persistent Memory Region: Not Supported 00:07:22.721 Optional Asynchronous Events Supported 00:07:22.721 Namespace Attribute Notices: Supported 00:07:22.721 Firmware Activation Notices: Not Supported 00:07:22.721 ANA Change Notices: Not Supported 00:07:22.721 PLE Aggregate Log Change Notices: Not Supported 00:07:22.721 LBA Status Info Alert Notices: Not Supported 00:07:22.721 EGE Aggregate Log Change Notices: Not Supported 00:07:22.721 Normal NVM Subsystem Shutdown event: Not Supported 00:07:22.721 Zone Descriptor Change Notices: Not Supported 00:07:22.721 Discovery Log Change Notices: Not Supported 00:07:22.721 Controller Attributes 00:07:22.721 128-bit Host Identifier: Not Supported 00:07:22.721 Non-Operational Permissive Mode: Not Supported 00:07:22.721 NVM Sets: Not Supported 00:07:22.721 Read Recovery Levels: Not Supported 00:07:22.721 Endurance Groups: Supported 00:07:22.721 Predictable Latency Mode: Not Supported 00:07:22.721 Traffic Based Keep Alive: Not Supported 00:07:22.721 Namespace Granularity: Not Supported 00:07:22.721 SQ Associations: Not Supported 00:07:22.721 UUID List: Not Supported 00:07:22.721 Multi-Domain Subsystem: Not Supported 00:07:22.721 Fixed Capacity Management: Not Supported 00:07:22.721 Variable Capacity Management: Not Supported 00:07:22.721 Delete Endurance Group: Not Supported 00:07:22.721 Delete NVM Set: Not Supported 00:07:22.721 Extended LBA Formats Supported: Supported 00:07:22.721 Flexible Data Placement Supported: Supported 00:07:22.721 00:07:22.721 Controller Memory Buffer Support 00:07:22.721 ================================ 00:07:22.721 Supported: No 00:07:22.721 00:07:22.721 Persistent Memory Region Support 00:07:22.721 ================================ 00:07:22.721 Supported: No 00:07:22.721 00:07:22.721 Admin Command Set Attributes 00:07:22.721 ============================ 00:07:22.721 Security Send/Receive: Not Supported 00:07:22.721 Format NVM: Supported 00:07:22.721 Firmware Activate/Download: Not Supported 00:07:22.721 Namespace Management: Supported 00:07:22.721 Device Self-Test: Not Supported 00:07:22.721 Directives: Supported 00:07:22.721 NVMe-MI: Not Supported 00:07:22.721 Virtualization Management: Not Supported 00:07:22.721 Doorbell Buffer Config: Supported 00:07:22.721 Get LBA Status Capability: Not Supported 00:07:22.721 Command & Feature Lockdown Capability: Not Supported 00:07:22.721 Abort Command Limit: 4 00:07:22.721 Async Event Request Limit: 4 00:07:22.721 Number of Firmware Slots: N/A 00:07:22.721 Firmware Slot 1 Read-Only: N/A 00:07:22.721 Firmware Activation Without Reset: N/A 00:07:22.721 Multiple Update Detection Support: N/A 00:07:22.721 Firmware Update Granularity: No Information Provided 00:07:22.721 Per-Namespace SMART Log: Yes 00:07:22.721 Asymmetric Namespace Access Log Page: Not Supported 
00:07:22.721 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:22.721 Command Effects Log Page: Supported 00:07:22.721 Get Log Page Extended Data: Supported 00:07:22.721 Telemetry Log Pages: Not Supported 00:07:22.721 Persistent Event Log Pages: Not Supported 00:07:22.721 Supported Log Pages Log Page: May Support 00:07:22.721 Commands Supported & Effects Log Page: Not Supported 00:07:22.721 Feature Identifiers & Effects Log Page: May Support 00:07:22.721 NVMe-MI Commands & Effects Log Page: May Support 00:07:22.721 Data Area 4 for Telemetry Log: Not Supported 00:07:22.721 Error Log Page Entries Supported: 1 00:07:22.721 Keep Alive: Not Supported 00:07:22.721 00:07:22.721 NVM Command Set Attributes 00:07:22.721 ========================== 00:07:22.721 Submission Queue Entry Size 00:07:22.721 Max: 64 00:07:22.721 Min: 64 00:07:22.721 Completion Queue Entry Size 00:07:22.721 Max: 16 00:07:22.721 Min: 16 00:07:22.721 Number of Namespaces: 256 00:07:22.721 Compare Command: Supported 00:07:22.721 Write Uncorrectable Command: Not Supported 00:07:22.721 Dataset Management Command: Supported 00:07:22.721 Write Zeroes Command: Supported 00:07:22.721 Set Features Save Field: Supported 00:07:22.721 Reservations: Not Supported 00:07:22.721 Timestamp: Supported 00:07:22.721 Copy: Supported 00:07:22.721 Volatile Write Cache: Present 00:07:22.721 Atomic Write Unit (Normal): 1 00:07:22.721 Atomic Write Unit (PFail): 1 00:07:22.721 Atomic Compare & Write Unit: 1 00:07:22.721 Fused Compare & Write: Not Supported 00:07:22.721 Scatter-Gather List 00:07:22.721 SGL Command Set: Supported 00:07:22.721 SGL Keyed: Not Supported 00:07:22.721 SGL Bit Bucket Descriptor: Not Supported 00:07:22.721 SGL Metadata Pointer: Not Supported 00:07:22.721 Oversized SGL: Not Supported 00:07:22.721 SGL Metadata Address: Not Supported 00:07:22.722 SGL Offset: Not Supported 00:07:22.722 Transport SGL Data Block: Not Supported 00:07:22.722 Replay Protected Memory Block: Not Supported 00:07:22.722 00:07:22.722 Firmware Slot Information 00:07:22.722 ========================= 00:07:22.722 Active slot: 1 00:07:22.722 Slot 1 Firmware Revision: 1.0 00:07:22.722 00:07:22.722 00:07:22.722 Commands Supported and Effects 00:07:22.722 ============================== 00:07:22.722 Admin Commands 00:07:22.722 -------------- 00:07:22.722 Delete I/O Submission Queue (00h): Supported 00:07:22.722 Create I/O Submission Queue (01h): Supported 00:07:22.722 Get Log Page (02h): Supported 00:07:22.722 Delete I/O Completion Queue (04h): Supported 00:07:22.722 Create I/O Completion Queue (05h): Supported 00:07:22.722 Identify (06h): Supported 00:07:22.722 Abort (08h): Supported 00:07:22.722 Set Features (09h): Supported 00:07:22.722 Get Features (0Ah): Supported 00:07:22.722 Asynchronous Event Request (0Ch): Supported 00:07:22.722 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:22.722 Directive Send (19h): Supported 00:07:22.722 Directive Receive (1Ah): Supported 00:07:22.722 Virtualization Management (1Ch): Supported 00:07:22.722 Doorbell Buffer Config (7Ch): Supported 00:07:22.722 Format NVM (80h): Supported LBA-Change 00:07:22.722 I/O Commands 00:07:22.722 ------------ 00:07:22.722 Flush (00h): Supported LBA-Change 00:07:22.722 Write (01h): Supported LBA-Change 00:07:22.722 Read (02h): Supported 00:07:22.722 Compare (05h): Supported 00:07:22.722 Write Zeroes (08h): Supported LBA-Change 00:07:22.722 Dataset Management (09h): Supported LBA-Change 00:07:22.722 Unknown (0Ch): Supported 00:07:22.722 Unknown (12h): Supported 00:07:22.722 Copy 
(19h): Supported LBA-Change 00:07:22.722 Unknown (1Dh): Supported LBA-Change 00:07:22.722 00:07:22.722 Error Log 00:07:22.722 ========= 00:07:22.722 00:07:22.722 Arbitration 00:07:22.722 =========== 00:07:22.722 Arbitration Burst: no limit 00:07:22.722 00:07:22.722 Power Management 00:07:22.722 ================ 00:07:22.722 Number of Power States: 1 00:07:22.722 Current Power State: Power State #0 00:07:22.722 Power State #0: 00:07:22.722 Max Power: 25.00 W 00:07:22.722 Non-Operational State: Operational 00:07:22.722 Entry Latency: 16 microseconds 00:07:22.722 Exit Latency: 4 microseconds 00:07:22.722 Relative Read Throughput: 0 00:07:22.722 Relative Read Latency: 0 00:07:22.722 Relative Write Throughput: 0 00:07:22.722 Relative Write Latency: 0 00:07:22.722 Idle Power: Not Reported 00:07:22.722 Active Power: Not Reported 00:07:22.722 Non-Operational Permissive Mode: Not Supported 00:07:22.722 00:07:22.722 Health Information 00:07:22.722 ================== 00:07:22.722 Critical Warnings: 00:07:22.722 Available Spare Space: OK 00:07:22.722 Temperature: OK 00:07:22.722 Device Reliability: OK 00:07:22.722 Read Only: No 00:07:22.722 Volatile Memory Backup: OK 00:07:22.722 Current Temperature: 323 Kelvin (50 Celsius) 00:07:22.722 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:22.722 Available Spare: 0% 00:07:22.722 Available Spare Threshold: 0% 00:07:22.722 Life Percentage Used: 0% 00:07:22.722 Data Units Read: 966 00:07:22.722 Data Units Written: 896 00:07:22.722 Host Read Commands: 41168 00:07:22.722 Host Write Commands: 40591 00:07:22.722 Controller Busy Time: 0 minutes 00:07:22.722 Power Cycles: 0 00:07:22.722 Power On Hours: 0 hours 00:07:22.722 Unsafe Shutdowns: 0 00:07:22.722 Unrecoverable Media Errors: 0 00:07:22.722 Lifetime Error Log Entries: 0 00:07:22.722 Warning Temperature Time: 0 minutes 00:07:22.722 Critical Temperature Time: 0 minutes 00:07:22.722 00:07:22.722 Number of Queues 00:07:22.722 ================ 00:07:22.722 Number of I/O Submission Queues: 64 00:07:22.722 Number of I/O Completion Queues: 64 00:07:22.722 00:07:22.722 ZNS Specific Controller Data 00:07:22.722 ============================ 00:07:22.722 Zone Append Size Limit: 0 00:07:22.722 00:07:22.722 00:07:22.722 Active Namespaces 00:07:22.722 ================= 00:07:22.722 Namespace ID:1 00:07:22.722 Error Recovery Timeout: Unlimited 00:07:22.722 Command Set Identifier: NVM (00h) 00:07:22.722 Deallocate: Supported 00:07:22.722 Deallocated/Unwritten Error: Supported 00:07:22.722 Deallocated Read Value: All 0x00 00:07:22.722 Deallocate in Write Zeroes: Not Supported 00:07:22.722 Deallocated Guard Field: 0xFFFF 00:07:22.722 Flush: Supported 00:07:22.722 Reservation: Not Supported 00:07:22.722 Namespace Sharing Capabilities: Multiple Controllers 00:07:22.722 Size (in LBAs): 262144 (1GiB) 00:07:22.722 Capacity (in LBAs): 262144 (1GiB) 00:07:22.722 Utilization (in LBAs): 262144 (1GiB) 00:07:22.722 Thin Provisioning: Not Supported 00:07:22.722 Per-NS Atomic Units: No 00:07:22.722 Maximum Single Source Range Length: 128 00:07:22.722 Maximum Copy Length: 128 00:07:22.722 Maximum Source Range Count: 128 00:07:22.722 NGUID/EUI64 Never Reused: No 00:07:22.722 Namespace Write Protected: No 00:07:22.722 Endurance group ID: 1 00:07:22.722 Number of LBA Formats: 8 00:07:22.722 Current LBA Format: LBA Format #04 00:07:22.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:22.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:22.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:22.722 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:22.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:22.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:22.722 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:22.722 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:22.722 00:07:22.722 Get Feature FDP: 00:07:22.722 ================ 00:07:22.722 Enabled: Yes 00:07:22.722 FDP configuration index: 0 00:07:22.722 00:07:22.722 FDP configurations log page 00:07:22.722 =========================== 00:07:22.722 Number of FDP configurations: 1 00:07:22.722 Version: 0 00:07:22.722 Size: 112 00:07:22.722 FDP Configuration Descriptor: 0 00:07:22.722 Descriptor Size: 96 00:07:22.722 Reclaim Group Identifier format: 2 00:07:22.722 FDP Volatile Write Cache: Not Present 00:07:22.722 FDP Configuration: Valid 00:07:22.722 Vendor Specific Size: 0 00:07:22.722 Number of Reclaim Groups: 2 00:07:22.722 Number of Reclaim Unit Handles: 8 00:07:22.722 Max Placement Identifiers: 128 00:07:22.722 Number of Namespaces Supported: 256 00:07:22.722 Reclaim unit Nominal Size: 6000000 bytes 00:07:22.722 Estimated Reclaim Unit Time Limit: Not Reported 00:07:22.722 RUH Desc #000: RUH Type: Initially Isolated 00:07:22.722 RUH Desc #001: RUH Type: Initially Isolated 00:07:22.722 RUH Desc #002: RUH Type: Initially Isolated 00:07:22.722 RUH Desc #003: RUH Type: Initially Isolated 00:07:22.722 RUH Desc #004: RUH Type: Initially Isolated 00:07:22.722 RUH Desc #005: RUH Type: Initially Isolated 00:07:22.722 RUH Desc #006: RUH Type: Initially Isolated 00:07:22.722 RUH Desc #007: RUH Type: Initially Isolated 00:07:22.722 00:07:22.722 FDP reclaim unit handle usage log page 00:07:22.722 ====================================== 00:07:22.722 Number of Reclaim Unit Handles: 8 00:07:22.722 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:22.722 RUH Usage Desc #001: RUH Attributes: Unused 00:07:22.722 RUH Usage Desc #002: RUH Attributes: Unused 00:07:22.722 RUH Usage Desc #003: RUH Attributes: Unused 00:07:22.722 RUH Usage Desc #004: RUH Attributes: Unused 00:07:22.722 RUH Usage Desc #005: RUH Attributes: Unused 00:07:22.722 RUH Usage Desc #006: RUH Attributes: Unused 00:07:22.722 RUH Usage Desc #007: RUH Attributes: Unused 00:07:22.722 00:07:22.722 FDP statistics log page 00:07:22.722 ======================= 00:07:22.722 Host bytes with metadata written: 554737664 00:07:22.722 Media bytes with metadata written: 554815488 00:07:22.722 Media bytes erased: 0 00:07:22.722 00:07:22.722 FDP events log page 00:07:22.722 =================== 00:07:22.722 Number of FDP events: 0 00:07:22.722 00:07:22.722 NVM Specific Namespace Data 00:07:22.722 =========================== 00:07:22.722 Logical Block Storage Tag Mask: 0 00:07:22.722 Protection Information Capabilities: 00:07:22.722 16b Guard Protection Information Storage Tag Support: No 00:07:22.722 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:22.722 Storage Tag Check Read Support: No 00:07:22.722 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.722 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.722 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.722 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.722 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.722 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.722 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.722 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:22.722 00:07:22.722 real 0m1.201s 00:07:22.722 user 0m0.431s 00:07:22.722 sys 0m0.558s 00:07:22.722 22:48:01 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.722 22:48:01 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:22.722 ************************************ 00:07:22.722 END TEST nvme_identify 00:07:22.722 ************************************ 00:07:22.722 22:48:01 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:22.723 22:48:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.723 22:48:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.723 22:48:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:22.723 ************************************ 00:07:22.723 START TEST nvme_perf 00:07:22.723 ************************************ 00:07:22.723 22:48:01 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:22.723 22:48:01 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:24.106 Initializing NVMe Controllers 00:07:24.106 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:24.106 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:24.106 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:24.106 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:24.106 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:24.106 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:24.106 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:24.106 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:24.106 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:24.106 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:24.106 Initialization complete. Launching workers. 
00:07:24.106 ======================================================== 00:07:24.106 Latency(us) 00:07:24.106 Device Information : IOPS MiB/s Average min max 00:07:24.106 PCIE (0000:00:13.0) NSID 1 from core 0: 15004.47 175.83 8542.73 6110.15 28625.20 00:07:24.106 PCIE (0000:00:10.0) NSID 1 from core 0: 15004.47 175.83 8527.98 6016.10 27088.26 00:07:24.106 PCIE (0000:00:11.0) NSID 1 from core 0: 15004.47 175.83 8514.27 6107.93 25256.71 00:07:24.106 PCIE (0000:00:12.0) NSID 1 from core 0: 15004.47 175.83 8499.46 6103.47 23873.18 00:07:24.106 PCIE (0000:00:12.0) NSID 2 from core 0: 15004.47 175.83 8484.33 6098.21 22063.79 00:07:24.106 PCIE (0000:00:12.0) NSID 3 from core 0: 15004.47 175.83 8469.33 6121.96 20137.72 00:07:24.106 ======================================================== 00:07:24.106 Total : 90026.82 1055.00 8506.35 6016.10 28625.20 00:07:24.106 00:07:24.106 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:24.106 ================================================================================= 00:07:24.106 1.00000% : 6225.920us 00:07:24.106 10.00000% : 6503.188us 00:07:24.106 25.00000% : 6805.662us 00:07:24.106 50.00000% : 7763.495us 00:07:24.106 75.00000% : 9679.163us 00:07:24.106 90.00000% : 11241.945us 00:07:24.106 95.00000% : 12703.902us 00:07:24.106 98.00000% : 13913.797us 00:07:24.106 99.00000% : 15728.640us 00:07:24.106 99.50000% : 22080.591us 00:07:24.106 99.90000% : 28432.542us 00:07:24.106 99.99000% : 28634.191us 00:07:24.106 99.99900% : 28634.191us 00:07:24.106 99.99990% : 28634.191us 00:07:24.106 99.99999% : 28634.191us 00:07:24.106 00:07:24.106 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:24.106 ================================================================================= 00:07:24.106 1.00000% : 6125.095us 00:07:24.106 10.00000% : 6452.775us 00:07:24.106 25.00000% : 6805.662us 00:07:24.106 50.00000% : 7713.083us 00:07:24.106 75.00000% : 9729.575us 00:07:24.107 90.00000% : 11241.945us 00:07:24.107 95.00000% : 12603.077us 00:07:24.107 98.00000% : 13913.797us 00:07:24.107 99.00000% : 15829.465us 00:07:24.107 99.50000% : 21072.345us 00:07:24.107 99.90000% : 26819.348us 00:07:24.107 99.99000% : 27222.646us 00:07:24.107 99.99900% : 27222.646us 00:07:24.107 99.99990% : 27222.646us 00:07:24.107 99.99999% : 27222.646us 00:07:24.107 00:07:24.107 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:24.107 ================================================================================= 00:07:24.107 1.00000% : 6225.920us 00:07:24.107 10.00000% : 6503.188us 00:07:24.107 25.00000% : 6805.662us 00:07:24.107 50.00000% : 7713.083us 00:07:24.107 75.00000% : 9729.575us 00:07:24.107 90.00000% : 11191.532us 00:07:24.107 95.00000% : 12603.077us 00:07:24.107 98.00000% : 13913.797us 00:07:24.107 99.00000% : 15930.289us 00:07:24.107 99.50000% : 19156.677us 00:07:24.107 99.90000% : 24903.680us 00:07:24.107 99.99000% : 25306.978us 00:07:24.107 99.99900% : 25306.978us 00:07:24.107 99.99990% : 25306.978us 00:07:24.107 99.99999% : 25306.978us 00:07:24.107 00:07:24.107 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:24.107 ================================================================================= 00:07:24.107 1.00000% : 6200.714us 00:07:24.107 10.00000% : 6503.188us 00:07:24.107 25.00000% : 6805.662us 00:07:24.107 50.00000% : 7713.083us 00:07:24.107 75.00000% : 9729.575us 00:07:24.107 90.00000% : 11191.532us 00:07:24.107 95.00000% : 12653.489us 00:07:24.107 98.00000% : 14115.446us 00:07:24.107 
99.00000% : 15022.868us 00:07:24.107 99.50000% : 18047.606us 00:07:24.107 99.90000% : 23492.135us 00:07:24.107 99.99000% : 23895.434us 00:07:24.107 99.99900% : 23895.434us 00:07:24.107 99.99990% : 23895.434us 00:07:24.107 99.99999% : 23895.434us 00:07:24.107 00:07:24.107 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:24.107 ================================================================================= 00:07:24.107 1.00000% : 6225.920us 00:07:24.107 10.00000% : 6503.188us 00:07:24.107 25.00000% : 6805.662us 00:07:24.107 50.00000% : 7713.083us 00:07:24.107 75.00000% : 9679.163us 00:07:24.107 90.00000% : 11191.532us 00:07:24.107 95.00000% : 12552.665us 00:07:24.107 98.00000% : 14417.920us 00:07:24.107 99.00000% : 15022.868us 00:07:24.107 99.50000% : 15930.289us 00:07:24.107 99.90000% : 21677.292us 00:07:24.107 99.99000% : 22080.591us 00:07:24.107 99.99900% : 22080.591us 00:07:24.107 99.99990% : 22080.591us 00:07:24.107 99.99999% : 22080.591us 00:07:24.107 00:07:24.107 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:24.107 ================================================================================= 00:07:24.107 1.00000% : 6225.920us 00:07:24.107 10.00000% : 6503.188us 00:07:24.107 25.00000% : 6805.662us 00:07:24.107 50.00000% : 7763.495us 00:07:24.107 75.00000% : 9679.163us 00:07:24.107 90.00000% : 11191.532us 00:07:24.107 95.00000% : 12653.489us 00:07:24.107 98.00000% : 14115.446us 00:07:24.107 99.00000% : 14821.218us 00:07:24.107 99.50000% : 15627.815us 00:07:24.107 99.90000% : 19761.625us 00:07:24.107 99.99000% : 20164.923us 00:07:24.107 99.99900% : 20164.923us 00:07:24.107 99.99990% : 20164.923us 00:07:24.107 99.99999% : 20164.923us 00:07:24.107 00:07:24.107 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:24.107 ============================================================================== 00:07:24.107 Range in us Cumulative IO count 00:07:24.107 6099.889 - 6125.095: 0.0133% ( 2) 00:07:24.107 6125.095 - 6150.302: 0.0798% ( 10) 00:07:24.107 6150.302 - 6175.508: 0.3524% ( 41) 00:07:24.107 6175.508 - 6200.714: 0.8245% ( 71) 00:07:24.107 6200.714 - 6225.920: 1.6090% ( 118) 00:07:24.107 6225.920 - 6251.126: 2.3870% ( 117) 00:07:24.107 6251.126 - 6276.332: 3.2580% ( 131) 00:07:24.107 6276.332 - 6301.538: 4.0359% ( 117) 00:07:24.107 6301.538 - 6326.745: 4.8072% ( 116) 00:07:24.107 6326.745 - 6351.951: 5.6117% ( 121) 00:07:24.107 6351.951 - 6377.157: 6.5160% ( 136) 00:07:24.107 6377.157 - 6402.363: 7.3404% ( 124) 00:07:24.107 6402.363 - 6427.569: 8.3444% ( 151) 00:07:24.107 6427.569 - 6452.775: 9.3816% ( 156) 00:07:24.107 6452.775 - 6503.188: 11.2301% ( 278) 00:07:24.107 6503.188 - 6553.600: 13.4441% ( 333) 00:07:24.107 6553.600 - 6604.012: 15.7048% ( 340) 00:07:24.107 6604.012 - 6654.425: 18.0319% ( 350) 00:07:24.107 6654.425 - 6704.837: 20.3258% ( 345) 00:07:24.107 6704.837 - 6755.249: 22.7926% ( 371) 00:07:24.107 6755.249 - 6805.662: 25.2859% ( 375) 00:07:24.107 6805.662 - 6856.074: 27.7527% ( 371) 00:07:24.107 6856.074 - 6906.486: 30.2128% ( 370) 00:07:24.107 6906.486 - 6956.898: 32.6795% ( 371) 00:07:24.107 6956.898 - 7007.311: 35.2527% ( 387) 00:07:24.107 7007.311 - 7057.723: 37.6330% ( 358) 00:07:24.107 7057.723 - 7108.135: 39.5412% ( 287) 00:07:24.107 7108.135 - 7158.548: 41.2434% ( 256) 00:07:24.107 7158.548 - 7208.960: 42.6064% ( 205) 00:07:24.107 7208.960 - 7259.372: 43.8630% ( 189) 00:07:24.107 7259.372 - 7309.785: 44.9601% ( 165) 00:07:24.107 7309.785 - 7360.197: 45.7713% ( 122) 00:07:24.107 7360.197 - 
7410.609: 46.5027% ( 110) 00:07:24.107 7410.609 - 7461.022: 47.1742% ( 101) 00:07:24.107 7461.022 - 7511.434: 47.7527% ( 87) 00:07:24.107 7511.434 - 7561.846: 48.3511% ( 90) 00:07:24.107 7561.846 - 7612.258: 48.9229% ( 86) 00:07:24.107 7612.258 - 7662.671: 49.4149% ( 74) 00:07:24.107 7662.671 - 7713.083: 49.8005% ( 58) 00:07:24.107 7713.083 - 7763.495: 50.2726% ( 71) 00:07:24.107 7763.495 - 7813.908: 50.8045% ( 80) 00:07:24.107 7813.908 - 7864.320: 51.2633% ( 69) 00:07:24.107 7864.320 - 7914.732: 51.6290% ( 55) 00:07:24.107 7914.732 - 7965.145: 51.9880% ( 54) 00:07:24.107 7965.145 - 8015.557: 52.4269% ( 66) 00:07:24.107 8015.557 - 8065.969: 52.8790% ( 68) 00:07:24.107 8065.969 - 8116.382: 53.2048% ( 49) 00:07:24.107 8116.382 - 8166.794: 53.5505% ( 52) 00:07:24.107 8166.794 - 8217.206: 54.1290% ( 87) 00:07:24.107 8217.206 - 8267.618: 54.5811% ( 68) 00:07:24.107 8267.618 - 8318.031: 55.0399% ( 69) 00:07:24.107 8318.031 - 8368.443: 55.5120% ( 71) 00:07:24.107 8368.443 - 8418.855: 56.0239% ( 77) 00:07:24.107 8418.855 - 8469.268: 56.5691% ( 82) 00:07:24.107 8469.268 - 8519.680: 56.9814% ( 62) 00:07:24.107 8519.680 - 8570.092: 57.4202% ( 66) 00:07:24.107 8570.092 - 8620.505: 57.8324% ( 62) 00:07:24.107 8620.505 - 8670.917: 58.2048% ( 56) 00:07:24.107 8670.917 - 8721.329: 58.6037% ( 60) 00:07:24.107 8721.329 - 8771.742: 59.0293% ( 64) 00:07:24.107 8771.742 - 8822.154: 59.5678% ( 81) 00:07:24.107 8822.154 - 8872.566: 60.1130% ( 82) 00:07:24.107 8872.566 - 8922.978: 60.8843% ( 116) 00:07:24.107 8922.978 - 8973.391: 61.6489% ( 115) 00:07:24.107 8973.391 - 9023.803: 62.6596% ( 152) 00:07:24.107 9023.803 - 9074.215: 63.6037% ( 142) 00:07:24.107 9074.215 - 9124.628: 64.5279% ( 139) 00:07:24.107 9124.628 - 9175.040: 65.4854% ( 144) 00:07:24.107 9175.040 - 9225.452: 66.4960% ( 152) 00:07:24.107 9225.452 - 9275.865: 67.4335% ( 141) 00:07:24.107 9275.865 - 9326.277: 68.3378% ( 136) 00:07:24.107 9326.277 - 9376.689: 69.3750% ( 156) 00:07:24.107 9376.689 - 9427.102: 70.4721% ( 165) 00:07:24.107 9427.102 - 9477.514: 71.4694% ( 150) 00:07:24.107 9477.514 - 9527.926: 72.4601% ( 149) 00:07:24.107 9527.926 - 9578.338: 73.4441% ( 148) 00:07:24.107 9578.338 - 9628.751: 74.4747% ( 155) 00:07:24.107 9628.751 - 9679.163: 75.4987% ( 154) 00:07:24.107 9679.163 - 9729.575: 76.4761% ( 147) 00:07:24.107 9729.575 - 9779.988: 77.3936% ( 138) 00:07:24.107 9779.988 - 9830.400: 78.2447% ( 128) 00:07:24.107 9830.400 - 9880.812: 78.9960% ( 113) 00:07:24.107 9880.812 - 9931.225: 79.6410% ( 97) 00:07:24.107 9931.225 - 9981.637: 80.1862% ( 82) 00:07:24.107 9981.637 - 10032.049: 80.6848% ( 75) 00:07:24.107 10032.049 - 10082.462: 81.2035% ( 78) 00:07:24.107 10082.462 - 10132.874: 81.6090% ( 61) 00:07:24.107 10132.874 - 10183.286: 81.9681% ( 54) 00:07:24.107 10183.286 - 10233.698: 82.3604% ( 59) 00:07:24.107 10233.698 - 10284.111: 82.6928% ( 50) 00:07:24.107 10284.111 - 10334.523: 82.9987% ( 46) 00:07:24.107 10334.523 - 10384.935: 83.3112% ( 47) 00:07:24.107 10384.935 - 10435.348: 83.6902% ( 57) 00:07:24.107 10435.348 - 10485.760: 84.0559% ( 55) 00:07:24.107 10485.760 - 10536.172: 84.4481% ( 59) 00:07:24.107 10536.172 - 10586.585: 84.8205% ( 56) 00:07:24.107 10586.585 - 10636.997: 85.1330% ( 47) 00:07:24.107 10636.997 - 10687.409: 85.5319% ( 60) 00:07:24.107 10687.409 - 10737.822: 85.9774% ( 67) 00:07:24.107 10737.822 - 10788.234: 86.4162% ( 66) 00:07:24.107 10788.234 - 10838.646: 86.8484% ( 65) 00:07:24.107 10838.646 - 10889.058: 87.2739% ( 64) 00:07:24.107 10889.058 - 10939.471: 87.7394% ( 70) 00:07:24.107 10939.471 - 10989.883: 
88.1981% ( 69) 00:07:24.107 10989.883 - 11040.295: 88.6037% ( 61) 00:07:24.107 11040.295 - 11090.708: 88.9894% ( 58) 00:07:24.107 11090.708 - 11141.120: 89.3684% ( 57) 00:07:24.107 11141.120 - 11191.532: 89.7606% ( 59) 00:07:24.107 11191.532 - 11241.945: 90.1130% ( 53) 00:07:24.107 11241.945 - 11292.357: 90.4588% ( 52) 00:07:24.107 11292.357 - 11342.769: 90.8178% ( 54) 00:07:24.107 11342.769 - 11393.182: 91.0838% ( 40) 00:07:24.107 11393.182 - 11443.594: 91.3564% ( 41) 00:07:24.107 11443.594 - 11494.006: 91.6290% ( 41) 00:07:24.107 11494.006 - 11544.418: 91.9548% ( 49) 00:07:24.107 11544.418 - 11594.831: 92.2074% ( 38) 00:07:24.107 11594.831 - 11645.243: 92.4468% ( 36) 00:07:24.107 11645.243 - 11695.655: 92.6596% ( 32) 00:07:24.107 11695.655 - 11746.068: 92.8059% ( 22) 00:07:24.107 11746.068 - 11796.480: 92.9322% ( 19) 00:07:24.107 11796.480 - 11846.892: 93.0386% ( 16) 00:07:24.107 11846.892 - 11897.305: 93.1516% ( 17) 00:07:24.108 11897.305 - 11947.717: 93.2513% ( 15) 00:07:24.108 11947.717 - 11998.129: 93.3577% ( 16) 00:07:24.108 11998.129 - 12048.542: 93.4641% ( 16) 00:07:24.108 12048.542 - 12098.954: 93.5971% ( 20) 00:07:24.108 12098.954 - 12149.366: 93.7301% ( 20) 00:07:24.108 12149.366 - 12199.778: 93.8630% ( 20) 00:07:24.108 12199.778 - 12250.191: 94.0160% ( 23) 00:07:24.108 12250.191 - 12300.603: 94.1556% ( 21) 00:07:24.108 12300.603 - 12351.015: 94.2952% ( 21) 00:07:24.108 12351.015 - 12401.428: 94.4282% ( 20) 00:07:24.108 12401.428 - 12451.840: 94.5545% ( 19) 00:07:24.108 12451.840 - 12502.252: 94.6676% ( 17) 00:07:24.108 12502.252 - 12552.665: 94.7673% ( 15) 00:07:24.108 12552.665 - 12603.077: 94.8936% ( 19) 00:07:24.108 12603.077 - 12653.489: 94.9934% ( 15) 00:07:24.108 12653.489 - 12703.902: 95.1263% ( 20) 00:07:24.108 12703.902 - 12754.314: 95.2660% ( 21) 00:07:24.108 12754.314 - 12804.726: 95.3856% ( 18) 00:07:24.108 12804.726 - 12855.138: 95.5918% ( 31) 00:07:24.108 12855.138 - 12905.551: 95.7513% ( 24) 00:07:24.108 12905.551 - 13006.375: 96.0638% ( 47) 00:07:24.108 13006.375 - 13107.200: 96.3564% ( 44) 00:07:24.108 13107.200 - 13208.025: 96.6223% ( 40) 00:07:24.108 13208.025 - 13308.849: 96.8351% ( 32) 00:07:24.108 13308.849 - 13409.674: 97.0678% ( 35) 00:07:24.108 13409.674 - 13510.498: 97.3138% ( 37) 00:07:24.108 13510.498 - 13611.323: 97.5133% ( 30) 00:07:24.108 13611.323 - 13712.148: 97.6795% ( 25) 00:07:24.108 13712.148 - 13812.972: 97.8590% ( 27) 00:07:24.108 13812.972 - 13913.797: 98.0053% ( 22) 00:07:24.108 13913.797 - 14014.622: 98.0918% ( 13) 00:07:24.108 14014.622 - 14115.446: 98.2048% ( 17) 00:07:24.108 14115.446 - 14216.271: 98.3112% ( 16) 00:07:24.108 14216.271 - 14317.095: 98.3777% ( 10) 00:07:24.108 14317.095 - 14417.920: 98.4441% ( 10) 00:07:24.108 14417.920 - 14518.745: 98.5106% ( 10) 00:07:24.108 14518.745 - 14619.569: 98.5771% ( 10) 00:07:24.108 14619.569 - 14720.394: 98.6303% ( 8) 00:07:24.108 14720.394 - 14821.218: 98.6702% ( 6) 00:07:24.108 14821.218 - 14922.043: 98.6968% ( 4) 00:07:24.108 14922.043 - 15022.868: 98.7234% ( 4) 00:07:24.108 15022.868 - 15123.692: 98.7766% ( 8) 00:07:24.108 15123.692 - 15224.517: 98.8231% ( 7) 00:07:24.108 15224.517 - 15325.342: 98.8630% ( 6) 00:07:24.108 15325.342 - 15426.166: 98.9029% ( 6) 00:07:24.108 15426.166 - 15526.991: 98.9561% ( 8) 00:07:24.108 15526.991 - 15627.815: 98.9960% ( 6) 00:07:24.108 15627.815 - 15728.640: 99.0426% ( 7) 00:07:24.108 15728.640 - 15829.465: 99.0824% ( 6) 00:07:24.108 15829.465 - 15930.289: 99.1290% ( 7) 00:07:24.108 15930.289 - 16031.114: 99.1489% ( 3) 00:07:24.108 20467.397 - 
20568.222: 99.1556% ( 1) 00:07:24.108 20568.222 - 20669.046: 99.1622% ( 1) 00:07:24.108 20669.046 - 20769.871: 99.1888% ( 4) 00:07:24.108 20769.871 - 20870.695: 99.2088% ( 3) 00:07:24.108 20870.695 - 20971.520: 99.2354% ( 4) 00:07:24.108 20971.520 - 21072.345: 99.2620% ( 4) 00:07:24.108 21072.345 - 21173.169: 99.2886% ( 4) 00:07:24.108 21173.169 - 21273.994: 99.3152% ( 4) 00:07:24.108 21273.994 - 21374.818: 99.3418% ( 4) 00:07:24.108 21374.818 - 21475.643: 99.3684% ( 4) 00:07:24.108 21475.643 - 21576.468: 99.3883% ( 3) 00:07:24.108 21576.468 - 21677.292: 99.4149% ( 4) 00:07:24.108 21677.292 - 21778.117: 99.4415% ( 4) 00:07:24.108 21778.117 - 21878.942: 99.4681% ( 4) 00:07:24.108 21878.942 - 21979.766: 99.4947% ( 4) 00:07:24.108 21979.766 - 22080.591: 99.5213% ( 4) 00:07:24.108 22080.591 - 22181.415: 99.5479% ( 4) 00:07:24.108 22181.415 - 22282.240: 99.5745% ( 4) 00:07:24.108 26819.348 - 27020.997: 99.5811% ( 1) 00:07:24.108 27020.997 - 27222.646: 99.6343% ( 8) 00:07:24.108 27222.646 - 27424.295: 99.6875% ( 8) 00:07:24.108 27424.295 - 27625.945: 99.7407% ( 8) 00:07:24.108 27625.945 - 27827.594: 99.7872% ( 7) 00:07:24.108 27827.594 - 28029.243: 99.8404% ( 8) 00:07:24.108 28029.243 - 28230.892: 99.8936% ( 8) 00:07:24.108 28230.892 - 28432.542: 99.9468% ( 8) 00:07:24.108 28432.542 - 28634.191: 100.0000% ( 8) 00:07:24.108 00:07:24.108 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:24.108 ============================================================================== 00:07:24.108 Range in us Cumulative IO count 00:07:24.108 5999.065 - 6024.271: 0.0199% ( 3) 00:07:24.108 6024.271 - 6049.477: 0.0665% ( 7) 00:07:24.108 6049.477 - 6074.683: 0.1928% ( 19) 00:07:24.108 6074.683 - 6099.889: 0.6516% ( 69) 00:07:24.108 6099.889 - 6125.095: 1.2168% ( 85) 00:07:24.108 6125.095 - 6150.302: 1.7487% ( 80) 00:07:24.108 6150.302 - 6175.508: 2.3404% ( 89) 00:07:24.108 6175.508 - 6200.714: 2.8856% ( 82) 00:07:24.108 6200.714 - 6225.920: 3.4774% ( 89) 00:07:24.108 6225.920 - 6251.126: 4.1090% ( 95) 00:07:24.108 6251.126 - 6276.332: 4.7939% ( 103) 00:07:24.108 6276.332 - 6301.538: 5.5851% ( 119) 00:07:24.108 6301.538 - 6326.745: 6.4029% ( 123) 00:07:24.108 6326.745 - 6351.951: 7.2739% ( 131) 00:07:24.108 6351.951 - 6377.157: 8.1383% ( 130) 00:07:24.108 6377.157 - 6402.363: 9.1290% ( 149) 00:07:24.108 6402.363 - 6427.569: 9.9003% ( 116) 00:07:24.108 6427.569 - 6452.775: 10.9441% ( 157) 00:07:24.108 6452.775 - 6503.188: 12.8391% ( 285) 00:07:24.108 6503.188 - 6553.600: 14.8537% ( 303) 00:07:24.108 6553.600 - 6604.012: 16.7686% ( 288) 00:07:24.108 6604.012 - 6654.425: 18.7699% ( 301) 00:07:24.108 6654.425 - 6704.837: 20.8577% ( 314) 00:07:24.108 6704.837 - 6755.249: 22.8723% ( 303) 00:07:24.108 6755.249 - 6805.662: 25.0399% ( 326) 00:07:24.108 6805.662 - 6856.074: 27.1077% ( 311) 00:07:24.108 6856.074 - 6906.486: 29.4016% ( 345) 00:07:24.108 6906.486 - 6956.898: 31.4827% ( 313) 00:07:24.108 6956.898 - 7007.311: 33.7633% ( 343) 00:07:24.108 7007.311 - 7057.723: 36.0106% ( 338) 00:07:24.108 7057.723 - 7108.135: 38.3178% ( 347) 00:07:24.108 7108.135 - 7158.548: 40.4388% ( 319) 00:07:24.108 7158.548 - 7208.960: 42.0279% ( 239) 00:07:24.108 7208.960 - 7259.372: 43.3644% ( 201) 00:07:24.108 7259.372 - 7309.785: 44.5944% ( 185) 00:07:24.108 7309.785 - 7360.197: 45.7247% ( 170) 00:07:24.108 7360.197 - 7410.609: 46.5625% ( 126) 00:07:24.108 7410.609 - 7461.022: 47.3537% ( 119) 00:07:24.108 7461.022 - 7511.434: 48.0319% ( 102) 00:07:24.108 7511.434 - 7561.846: 48.6303% ( 90) 00:07:24.108 7561.846 - 7612.258: 
49.1090% ( 72) 00:07:24.108 7612.258 - 7662.671: 49.5678% ( 69) 00:07:24.108 7662.671 - 7713.083: 50.0931% ( 79) 00:07:24.108 7713.083 - 7763.495: 50.4987% ( 61) 00:07:24.108 7763.495 - 7813.908: 50.9973% ( 75) 00:07:24.108 7813.908 - 7864.320: 51.4029% ( 61) 00:07:24.108 7864.320 - 7914.732: 51.8085% ( 61) 00:07:24.108 7914.732 - 7965.145: 52.3072% ( 75) 00:07:24.108 7965.145 - 8015.557: 52.7394% ( 65) 00:07:24.108 8015.557 - 8065.969: 53.1184% ( 57) 00:07:24.108 8065.969 - 8116.382: 53.5306% ( 62) 00:07:24.108 8116.382 - 8166.794: 53.9029% ( 56) 00:07:24.108 8166.794 - 8217.206: 54.2952% ( 59) 00:07:24.108 8217.206 - 8267.618: 54.7407% ( 67) 00:07:24.108 8267.618 - 8318.031: 55.1596% ( 63) 00:07:24.108 8318.031 - 8368.443: 55.6051% ( 67) 00:07:24.108 8368.443 - 8418.855: 56.0173% ( 62) 00:07:24.108 8418.855 - 8469.268: 56.4029% ( 58) 00:07:24.108 8469.268 - 8519.680: 56.8484% ( 67) 00:07:24.108 8519.680 - 8570.092: 57.2673% ( 63) 00:07:24.108 8570.092 - 8620.505: 57.7061% ( 66) 00:07:24.108 8620.505 - 8670.917: 58.2181% ( 77) 00:07:24.108 8670.917 - 8721.329: 58.7500% ( 80) 00:07:24.108 8721.329 - 8771.742: 59.3484% ( 90) 00:07:24.108 8771.742 - 8822.154: 59.9269% ( 87) 00:07:24.108 8822.154 - 8872.566: 60.5519% ( 94) 00:07:24.108 8872.566 - 8922.978: 61.3963% ( 127) 00:07:24.108 8922.978 - 8973.391: 62.3072% ( 137) 00:07:24.108 8973.391 - 9023.803: 63.1250% ( 123) 00:07:24.108 9023.803 - 9074.215: 63.8697% ( 112) 00:07:24.108 9074.215 - 9124.628: 64.7806% ( 137) 00:07:24.108 9124.628 - 9175.040: 65.6981% ( 138) 00:07:24.108 9175.040 - 9225.452: 66.5957% ( 135) 00:07:24.108 9225.452 - 9275.865: 67.5000% ( 136) 00:07:24.108 9275.865 - 9326.277: 68.3378% ( 126) 00:07:24.108 9326.277 - 9376.689: 69.2420% ( 136) 00:07:24.108 9376.689 - 9427.102: 70.0798% ( 126) 00:07:24.108 9427.102 - 9477.514: 71.0771% ( 150) 00:07:24.108 9477.514 - 9527.926: 71.8351% ( 114) 00:07:24.108 9527.926 - 9578.338: 72.7261% ( 134) 00:07:24.108 9578.338 - 9628.751: 73.7035% ( 147) 00:07:24.108 9628.751 - 9679.163: 74.5878% ( 133) 00:07:24.108 9679.163 - 9729.575: 75.4056% ( 123) 00:07:24.108 9729.575 - 9779.988: 76.1503% ( 112) 00:07:24.108 9779.988 - 9830.400: 77.1941% ( 157) 00:07:24.108 9830.400 - 9880.812: 77.9189% ( 109) 00:07:24.108 9880.812 - 9931.225: 78.6569% ( 111) 00:07:24.108 9931.225 - 9981.637: 79.3418% ( 103) 00:07:24.108 9981.637 - 10032.049: 80.0000% ( 99) 00:07:24.108 10032.049 - 10082.462: 80.4920% ( 74) 00:07:24.108 10082.462 - 10132.874: 80.8511% ( 54) 00:07:24.108 10132.874 - 10183.286: 81.3298% ( 72) 00:07:24.108 10183.286 - 10233.698: 81.7154% ( 58) 00:07:24.108 10233.698 - 10284.111: 82.1543% ( 66) 00:07:24.108 10284.111 - 10334.523: 82.5332% ( 57) 00:07:24.108 10334.523 - 10384.935: 82.9455% ( 62) 00:07:24.108 10384.935 - 10435.348: 83.3311% ( 58) 00:07:24.108 10435.348 - 10485.760: 83.7766% ( 67) 00:07:24.108 10485.760 - 10536.172: 84.3617% ( 88) 00:07:24.108 10536.172 - 10586.585: 84.8271% ( 70) 00:07:24.108 10586.585 - 10636.997: 85.1928% ( 55) 00:07:24.108 10636.997 - 10687.409: 85.5984% ( 61) 00:07:24.108 10687.409 - 10737.822: 86.0638% ( 70) 00:07:24.108 10737.822 - 10788.234: 86.4428% ( 57) 00:07:24.108 10788.234 - 10838.646: 86.8750% ( 65) 00:07:24.108 10838.646 - 10889.058: 87.3338% ( 69) 00:07:24.108 10889.058 - 10939.471: 87.7660% ( 65) 00:07:24.109 10939.471 - 10989.883: 88.1715% ( 61) 00:07:24.109 10989.883 - 11040.295: 88.5239% ( 53) 00:07:24.109 11040.295 - 11090.708: 88.9162% ( 59) 00:07:24.109 11090.708 - 11141.120: 89.2620% ( 52) 00:07:24.109 11141.120 - 11191.532: 
89.7540% ( 74) 00:07:24.109 11191.532 - 11241.945: 90.0864% ( 50) 00:07:24.109 11241.945 - 11292.357: 90.4455% ( 54) 00:07:24.109 11292.357 - 11342.769: 90.7247% ( 42) 00:07:24.109 11342.769 - 11393.182: 91.0838% ( 54) 00:07:24.109 11393.182 - 11443.594: 91.4029% ( 48) 00:07:24.109 11443.594 - 11494.006: 91.6489% ( 37) 00:07:24.109 11494.006 - 11544.418: 91.9215% ( 41) 00:07:24.109 11544.418 - 11594.831: 92.1476% ( 34) 00:07:24.109 11594.831 - 11645.243: 92.3803% ( 35) 00:07:24.109 11645.243 - 11695.655: 92.6263% ( 37) 00:07:24.109 11695.655 - 11746.068: 92.7726% ( 22) 00:07:24.109 11746.068 - 11796.480: 92.9189% ( 22) 00:07:24.109 11796.480 - 11846.892: 93.1051% ( 28) 00:07:24.109 11846.892 - 11897.305: 93.2646% ( 24) 00:07:24.109 11897.305 - 11947.717: 93.4508% ( 28) 00:07:24.109 11947.717 - 11998.129: 93.6104% ( 24) 00:07:24.109 11998.129 - 12048.542: 93.7367% ( 19) 00:07:24.109 12048.542 - 12098.954: 93.8564% ( 18) 00:07:24.109 12098.954 - 12149.366: 93.9827% ( 19) 00:07:24.109 12149.366 - 12199.778: 94.0492% ( 10) 00:07:24.109 12199.778 - 12250.191: 94.1489% ( 15) 00:07:24.109 12250.191 - 12300.603: 94.2553% ( 16) 00:07:24.109 12300.603 - 12351.015: 94.4082% ( 23) 00:07:24.109 12351.015 - 12401.428: 94.6343% ( 34) 00:07:24.109 12401.428 - 12451.840: 94.7473% ( 17) 00:07:24.109 12451.840 - 12502.252: 94.8737% ( 19) 00:07:24.109 12502.252 - 12552.665: 94.9734% ( 15) 00:07:24.109 12552.665 - 12603.077: 95.1662% ( 29) 00:07:24.109 12603.077 - 12653.489: 95.3324% ( 25) 00:07:24.109 12653.489 - 12703.902: 95.4854% ( 23) 00:07:24.109 12703.902 - 12754.314: 95.6117% ( 19) 00:07:24.109 12754.314 - 12804.726: 95.7314% ( 18) 00:07:24.109 12804.726 - 12855.138: 95.9043% ( 26) 00:07:24.109 12855.138 - 12905.551: 96.0306% ( 19) 00:07:24.109 12905.551 - 13006.375: 96.2434% ( 32) 00:07:24.109 13006.375 - 13107.200: 96.4694% ( 34) 00:07:24.109 13107.200 - 13208.025: 96.6888% ( 33) 00:07:24.109 13208.025 - 13308.849: 96.8750% ( 28) 00:07:24.109 13308.849 - 13409.674: 97.1077% ( 35) 00:07:24.109 13409.674 - 13510.498: 97.2872% ( 27) 00:07:24.109 13510.498 - 13611.323: 97.5266% ( 36) 00:07:24.109 13611.323 - 13712.148: 97.6729% ( 22) 00:07:24.109 13712.148 - 13812.972: 97.8590% ( 28) 00:07:24.109 13812.972 - 13913.797: 98.0120% ( 23) 00:07:24.109 13913.797 - 14014.622: 98.1782% ( 25) 00:07:24.109 14014.622 - 14115.446: 98.2979% ( 18) 00:07:24.109 14115.446 - 14216.271: 98.4109% ( 17) 00:07:24.109 14216.271 - 14317.095: 98.4774% ( 10) 00:07:24.109 14317.095 - 14417.920: 98.5705% ( 14) 00:07:24.109 14417.920 - 14518.745: 98.6370% ( 10) 00:07:24.109 14518.745 - 14619.569: 98.7101% ( 11) 00:07:24.109 14619.569 - 14720.394: 98.7234% ( 2) 00:07:24.109 15022.868 - 15123.692: 98.7633% ( 6) 00:07:24.109 15123.692 - 15224.517: 98.7899% ( 4) 00:07:24.109 15224.517 - 15325.342: 98.8231% ( 5) 00:07:24.109 15325.342 - 15426.166: 98.8630% ( 6) 00:07:24.109 15426.166 - 15526.991: 98.8963% ( 5) 00:07:24.109 15526.991 - 15627.815: 98.9362% ( 6) 00:07:24.109 15627.815 - 15728.640: 98.9694% ( 5) 00:07:24.109 15728.640 - 15829.465: 99.0093% ( 6) 00:07:24.109 15829.465 - 15930.289: 99.0492% ( 6) 00:07:24.109 15930.289 - 16031.114: 99.0559% ( 1) 00:07:24.109 16031.114 - 16131.938: 99.1223% ( 10) 00:07:24.109 16131.938 - 16232.763: 99.1489% ( 4) 00:07:24.109 19459.151 - 19559.975: 99.1622% ( 2) 00:07:24.109 19559.975 - 19660.800: 99.1888% ( 4) 00:07:24.109 19660.800 - 19761.625: 99.2088% ( 3) 00:07:24.109 19761.625 - 19862.449: 99.2287% ( 3) 00:07:24.109 19862.449 - 19963.274: 99.2553% ( 4) 00:07:24.109 19963.274 - 20064.098: 
99.2753% ( 3) 00:07:24.109 20064.098 - 20164.923: 99.2952% ( 3) 00:07:24.109 20164.923 - 20265.748: 99.3218% ( 4) 00:07:24.109 20265.748 - 20366.572: 99.3484% ( 4) 00:07:24.109 20366.572 - 20467.397: 99.3750% ( 4) 00:07:24.109 20467.397 - 20568.222: 99.3949% ( 3) 00:07:24.109 20568.222 - 20669.046: 99.4215% ( 4) 00:07:24.109 20669.046 - 20769.871: 99.4481% ( 4) 00:07:24.109 20769.871 - 20870.695: 99.4681% ( 3) 00:07:24.109 20870.695 - 20971.520: 99.4880% ( 3) 00:07:24.109 20971.520 - 21072.345: 99.5146% ( 4) 00:07:24.109 21072.345 - 21173.169: 99.5346% ( 3) 00:07:24.109 21173.169 - 21273.994: 99.5612% ( 4) 00:07:24.109 21273.994 - 21374.818: 99.5745% ( 2) 00:07:24.109 25206.154 - 25306.978: 99.5878% ( 2) 00:07:24.109 25306.978 - 25407.803: 99.6144% ( 4) 00:07:24.109 25407.803 - 25508.628: 99.6343% ( 3) 00:07:24.109 25508.628 - 25609.452: 99.6609% ( 4) 00:07:24.109 25609.452 - 25710.277: 99.6809% ( 3) 00:07:24.109 25710.277 - 25811.102: 99.7008% ( 3) 00:07:24.109 25811.102 - 26012.751: 99.7540% ( 8) 00:07:24.109 26012.751 - 26214.400: 99.8005% ( 7) 00:07:24.109 26214.400 - 26416.049: 99.8471% ( 7) 00:07:24.109 26416.049 - 26617.698: 99.8936% ( 7) 00:07:24.109 26617.698 - 26819.348: 99.9335% ( 6) 00:07:24.109 26819.348 - 27020.997: 99.9867% ( 8) 00:07:24.109 27020.997 - 27222.646: 100.0000% ( 2) 00:07:24.109 00:07:24.109 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:24.109 ============================================================================== 00:07:24.109 Range in us Cumulative IO count 00:07:24.109 6099.889 - 6125.095: 0.0266% ( 4) 00:07:24.109 6125.095 - 6150.302: 0.0731% ( 7) 00:07:24.109 6150.302 - 6175.508: 0.1662% ( 14) 00:07:24.109 6175.508 - 6200.714: 0.4255% ( 39) 00:07:24.109 6200.714 - 6225.920: 1.0705% ( 97) 00:07:24.109 6225.920 - 6251.126: 2.0213% ( 143) 00:07:24.109 6251.126 - 6276.332: 2.9255% ( 136) 00:07:24.109 6276.332 - 6301.538: 3.7832% ( 129) 00:07:24.109 6301.538 - 6326.745: 4.8936% ( 167) 00:07:24.109 6326.745 - 6351.951: 5.8178% ( 139) 00:07:24.109 6351.951 - 6377.157: 6.6024% ( 118) 00:07:24.109 6377.157 - 6402.363: 7.4601% ( 129) 00:07:24.109 6402.363 - 6427.569: 8.5439% ( 163) 00:07:24.109 6427.569 - 6452.775: 9.4016% ( 129) 00:07:24.109 6452.775 - 6503.188: 11.3697% ( 296) 00:07:24.109 6503.188 - 6553.600: 13.8298% ( 370) 00:07:24.109 6553.600 - 6604.012: 16.1436% ( 348) 00:07:24.109 6604.012 - 6654.425: 18.3378% ( 330) 00:07:24.109 6654.425 - 6704.837: 20.6981% ( 355) 00:07:24.109 6704.837 - 6755.249: 23.0918% ( 360) 00:07:24.109 6755.249 - 6805.662: 25.6051% ( 378) 00:07:24.109 6805.662 - 6856.074: 28.0452% ( 367) 00:07:24.109 6856.074 - 6906.486: 30.5585% ( 378) 00:07:24.109 6906.486 - 6956.898: 32.9854% ( 365) 00:07:24.109 6956.898 - 7007.311: 35.5120% ( 380) 00:07:24.109 7007.311 - 7057.723: 37.9854% ( 372) 00:07:24.109 7057.723 - 7108.135: 39.9269% ( 292) 00:07:24.109 7108.135 - 7158.548: 41.5492% ( 244) 00:07:24.109 7158.548 - 7208.960: 42.9787% ( 215) 00:07:24.109 7208.960 - 7259.372: 44.2819% ( 196) 00:07:24.109 7259.372 - 7309.785: 45.4189% ( 171) 00:07:24.109 7309.785 - 7360.197: 46.3231% ( 136) 00:07:24.109 7360.197 - 7410.609: 47.1809% ( 129) 00:07:24.109 7410.609 - 7461.022: 47.8324% ( 98) 00:07:24.109 7461.022 - 7511.434: 48.4309% ( 90) 00:07:24.109 7511.434 - 7561.846: 48.9362% ( 76) 00:07:24.109 7561.846 - 7612.258: 49.4348% ( 75) 00:07:24.109 7612.258 - 7662.671: 49.8870% ( 68) 00:07:24.109 7662.671 - 7713.083: 50.3524% ( 70) 00:07:24.109 7713.083 - 7763.495: 50.7846% ( 65) 00:07:24.109 7763.495 - 7813.908: 51.1769% 
( 59) 00:07:24.109 7813.908 - 7864.320: 51.6622% ( 73) 00:07:24.109 7864.320 - 7914.732: 52.1410% ( 72) 00:07:24.109 7914.732 - 7965.145: 52.6862% ( 82) 00:07:24.109 7965.145 - 8015.557: 53.0984% ( 62) 00:07:24.109 8015.557 - 8065.969: 53.4375% ( 51) 00:07:24.109 8065.969 - 8116.382: 53.8165% ( 57) 00:07:24.109 8116.382 - 8166.794: 54.1689% ( 53) 00:07:24.109 8166.794 - 8217.206: 54.5279% ( 54) 00:07:24.109 8217.206 - 8267.618: 54.8471% ( 48) 00:07:24.109 8267.618 - 8318.031: 55.1396% ( 44) 00:07:24.109 8318.031 - 8368.443: 55.4654% ( 49) 00:07:24.109 8368.443 - 8418.855: 55.7846% ( 48) 00:07:24.109 8418.855 - 8469.268: 56.1170% ( 50) 00:07:24.109 8469.268 - 8519.680: 56.5093% ( 59) 00:07:24.109 8519.680 - 8570.092: 56.9149% ( 61) 00:07:24.109 8570.092 - 8620.505: 57.4202% ( 76) 00:07:24.109 8620.505 - 8670.917: 57.9255% ( 76) 00:07:24.109 8670.917 - 8721.329: 58.3178% ( 59) 00:07:24.109 8721.329 - 8771.742: 58.6968% ( 57) 00:07:24.109 8771.742 - 8822.154: 59.1423% ( 67) 00:07:24.109 8822.154 - 8872.566: 59.7606% ( 93) 00:07:24.109 8872.566 - 8922.978: 60.4654% ( 106) 00:07:24.109 8922.978 - 8973.391: 61.2168% ( 113) 00:07:24.109 8973.391 - 9023.803: 61.9614% ( 112) 00:07:24.109 9023.803 - 9074.215: 62.9455% ( 148) 00:07:24.109 9074.215 - 9124.628: 63.9428% ( 150) 00:07:24.109 9124.628 - 9175.040: 65.0266% ( 163) 00:07:24.109 9175.040 - 9225.452: 65.9907% ( 145) 00:07:24.109 9225.452 - 9275.865: 66.9814% ( 149) 00:07:24.109 9275.865 - 9326.277: 67.9255% ( 142) 00:07:24.109 9326.277 - 9376.689: 68.8963% ( 146) 00:07:24.109 9376.689 - 9427.102: 69.9468% ( 158) 00:07:24.109 9427.102 - 9477.514: 70.9707% ( 154) 00:07:24.109 9477.514 - 9527.926: 71.8816% ( 137) 00:07:24.109 9527.926 - 9578.338: 72.7926% ( 137) 00:07:24.109 9578.338 - 9628.751: 73.7500% ( 144) 00:07:24.109 9628.751 - 9679.163: 74.6609% ( 137) 00:07:24.110 9679.163 - 9729.575: 75.5585% ( 135) 00:07:24.110 9729.575 - 9779.988: 76.4428% ( 133) 00:07:24.110 9779.988 - 9830.400: 77.2141% ( 116) 00:07:24.110 9830.400 - 9880.812: 77.8856% ( 101) 00:07:24.110 9880.812 - 9931.225: 78.6104% ( 109) 00:07:24.110 9931.225 - 9981.637: 79.3019% ( 104) 00:07:24.110 9981.637 - 10032.049: 79.8604% ( 84) 00:07:24.110 10032.049 - 10082.462: 80.3790% ( 78) 00:07:24.110 10082.462 - 10132.874: 80.8910% ( 77) 00:07:24.110 10132.874 - 10183.286: 81.4295% ( 81) 00:07:24.110 10183.286 - 10233.698: 81.9814% ( 83) 00:07:24.110 10233.698 - 10284.111: 82.4535% ( 71) 00:07:24.110 10284.111 - 10334.523: 82.8989% ( 67) 00:07:24.110 10334.523 - 10384.935: 83.3777% ( 72) 00:07:24.110 10384.935 - 10435.348: 83.8630% ( 73) 00:07:24.110 10435.348 - 10485.760: 84.3218% ( 69) 00:07:24.110 10485.760 - 10536.172: 84.8005% ( 72) 00:07:24.110 10536.172 - 10586.585: 85.3923% ( 89) 00:07:24.110 10586.585 - 10636.997: 85.8444% ( 68) 00:07:24.110 10636.997 - 10687.409: 86.2633% ( 63) 00:07:24.110 10687.409 - 10737.822: 86.7686% ( 76) 00:07:24.110 10737.822 - 10788.234: 87.2473% ( 72) 00:07:24.110 10788.234 - 10838.646: 87.6795% ( 65) 00:07:24.110 10838.646 - 10889.058: 88.0452% ( 55) 00:07:24.110 10889.058 - 10939.471: 88.3710% ( 49) 00:07:24.110 10939.471 - 10989.883: 88.7101% ( 51) 00:07:24.110 10989.883 - 11040.295: 89.0758% ( 55) 00:07:24.110 11040.295 - 11090.708: 89.4348% ( 54) 00:07:24.110 11090.708 - 11141.120: 89.7739% ( 51) 00:07:24.110 11141.120 - 11191.532: 90.1130% ( 51) 00:07:24.110 11191.532 - 11241.945: 90.4322% ( 48) 00:07:24.110 11241.945 - 11292.357: 90.7513% ( 48) 00:07:24.110 11292.357 - 11342.769: 91.0173% ( 40) 00:07:24.110 11342.769 - 11393.182: 
91.2832% ( 40) 00:07:24.110 11393.182 - 11443.594: 91.5293% ( 37) 00:07:24.110 11443.594 - 11494.006: 91.7952% ( 40) 00:07:24.110 11494.006 - 11544.418: 92.0279% ( 35) 00:07:24.110 11544.418 - 11594.831: 92.2340% ( 31) 00:07:24.110 11594.831 - 11645.243: 92.4269% ( 29) 00:07:24.110 11645.243 - 11695.655: 92.5532% ( 19) 00:07:24.110 11695.655 - 11746.068: 92.6662% ( 17) 00:07:24.110 11746.068 - 11796.480: 92.7593% ( 14) 00:07:24.110 11796.480 - 11846.892: 92.8790% ( 18) 00:07:24.110 11846.892 - 11897.305: 92.9787% ( 15) 00:07:24.110 11897.305 - 11947.717: 93.0652% ( 13) 00:07:24.110 11947.717 - 11998.129: 93.1981% ( 20) 00:07:24.110 11998.129 - 12048.542: 93.3311% ( 20) 00:07:24.110 12048.542 - 12098.954: 93.4574% ( 19) 00:07:24.110 12098.954 - 12149.366: 93.5705% ( 17) 00:07:24.110 12149.366 - 12199.778: 93.7168% ( 22) 00:07:24.110 12199.778 - 12250.191: 93.8963% ( 27) 00:07:24.110 12250.191 - 12300.603: 94.0492% ( 23) 00:07:24.110 12300.603 - 12351.015: 94.1888% ( 21) 00:07:24.110 12351.015 - 12401.428: 94.3484% ( 24) 00:07:24.110 12401.428 - 12451.840: 94.5013% ( 23) 00:07:24.110 12451.840 - 12502.252: 94.7008% ( 30) 00:07:24.110 12502.252 - 12552.665: 94.8737% ( 26) 00:07:24.110 12552.665 - 12603.077: 95.0332% ( 24) 00:07:24.110 12603.077 - 12653.489: 95.1795% ( 22) 00:07:24.110 12653.489 - 12703.902: 95.3258% ( 22) 00:07:24.110 12703.902 - 12754.314: 95.4654% ( 21) 00:07:24.110 12754.314 - 12804.726: 95.6117% ( 22) 00:07:24.110 12804.726 - 12855.138: 95.7580% ( 22) 00:07:24.110 12855.138 - 12905.551: 95.8843% ( 19) 00:07:24.110 12905.551 - 13006.375: 96.1702% ( 43) 00:07:24.110 13006.375 - 13107.200: 96.3697% ( 30) 00:07:24.110 13107.200 - 13208.025: 96.5559% ( 28) 00:07:24.110 13208.025 - 13308.849: 96.8218% ( 40) 00:07:24.110 13308.849 - 13409.674: 97.0080% ( 28) 00:07:24.110 13409.674 - 13510.498: 97.2473% ( 36) 00:07:24.110 13510.498 - 13611.323: 97.4269% ( 27) 00:07:24.110 13611.323 - 13712.148: 97.6064% ( 27) 00:07:24.110 13712.148 - 13812.972: 97.8191% ( 32) 00:07:24.110 13812.972 - 13913.797: 98.0319% ( 32) 00:07:24.110 13913.797 - 14014.622: 98.2247% ( 29) 00:07:24.110 14014.622 - 14115.446: 98.3710% ( 22) 00:07:24.110 14115.446 - 14216.271: 98.4641% ( 14) 00:07:24.110 14216.271 - 14317.095: 98.5372% ( 11) 00:07:24.110 14317.095 - 14417.920: 98.5838% ( 7) 00:07:24.110 14417.920 - 14518.745: 98.6237% ( 6) 00:07:24.110 14518.745 - 14619.569: 98.6636% ( 6) 00:07:24.110 14619.569 - 14720.394: 98.7035% ( 6) 00:07:24.110 14720.394 - 14821.218: 98.7234% ( 3) 00:07:24.110 15123.692 - 15224.517: 98.7367% ( 2) 00:07:24.110 15224.517 - 15325.342: 98.7766% ( 6) 00:07:24.110 15325.342 - 15426.166: 98.8165% ( 6) 00:07:24.110 15426.166 - 15526.991: 98.8564% ( 6) 00:07:24.110 15526.991 - 15627.815: 98.8963% ( 6) 00:07:24.110 15627.815 - 15728.640: 98.9428% ( 7) 00:07:24.110 15728.640 - 15829.465: 98.9827% ( 6) 00:07:24.110 15829.465 - 15930.289: 99.0226% ( 6) 00:07:24.110 15930.289 - 16031.114: 99.0625% ( 6) 00:07:24.110 16031.114 - 16131.938: 99.1090% ( 7) 00:07:24.110 16131.938 - 16232.763: 99.1489% ( 6) 00:07:24.110 17644.308 - 17745.132: 99.1689% ( 3) 00:07:24.110 17745.132 - 17845.957: 99.1955% ( 4) 00:07:24.110 17845.957 - 17946.782: 99.2221% ( 4) 00:07:24.110 17946.782 - 18047.606: 99.2420% ( 3) 00:07:24.110 18047.606 - 18148.431: 99.2686% ( 4) 00:07:24.110 18148.431 - 18249.255: 99.2952% ( 4) 00:07:24.110 18249.255 - 18350.080: 99.3152% ( 3) 00:07:24.110 18350.080 - 18450.905: 99.3418% ( 4) 00:07:24.110 18450.905 - 18551.729: 99.3617% ( 3) 00:07:24.110 18551.729 - 18652.554: 99.3883% 
( 4) 00:07:24.110 18652.554 - 18753.378: 99.4149% ( 4) 00:07:24.110 18753.378 - 18854.203: 99.4415% ( 4) 00:07:24.110 18854.203 - 18955.028: 99.4681% ( 4) 00:07:24.110 18955.028 - 19055.852: 99.4947% ( 4) 00:07:24.110 19055.852 - 19156.677: 99.5213% ( 4) 00:07:24.110 19156.677 - 19257.502: 99.5479% ( 4) 00:07:24.110 19257.502 - 19358.326: 99.5678% ( 3) 00:07:24.110 19358.326 - 19459.151: 99.5745% ( 1) 00:07:24.110 23391.311 - 23492.135: 99.5878% ( 2) 00:07:24.110 23492.135 - 23592.960: 99.6144% ( 4) 00:07:24.110 23592.960 - 23693.785: 99.6343% ( 3) 00:07:24.110 23693.785 - 23794.609: 99.6609% ( 4) 00:07:24.110 23794.609 - 23895.434: 99.6809% ( 3) 00:07:24.110 23895.434 - 23996.258: 99.7008% ( 3) 00:07:24.110 23996.258 - 24097.083: 99.7207% ( 3) 00:07:24.110 24097.083 - 24197.908: 99.7407% ( 3) 00:07:24.110 24197.908 - 24298.732: 99.7673% ( 4) 00:07:24.110 24298.732 - 24399.557: 99.7939% ( 4) 00:07:24.110 24399.557 - 24500.382: 99.8138% ( 3) 00:07:24.110 24500.382 - 24601.206: 99.8404% ( 4) 00:07:24.110 24601.206 - 24702.031: 99.8670% ( 4) 00:07:24.110 24702.031 - 24802.855: 99.8870% ( 3) 00:07:24.110 24802.855 - 24903.680: 99.9136% ( 4) 00:07:24.110 24903.680 - 25004.505: 99.9402% ( 4) 00:07:24.110 25004.505 - 25105.329: 99.9601% ( 3) 00:07:24.110 25105.329 - 25206.154: 99.9867% ( 4) 00:07:24.110 25206.154 - 25306.978: 100.0000% ( 2) 00:07:24.110 00:07:24.110 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:24.110 ============================================================================== 00:07:24.110 Range in us Cumulative IO count 00:07:24.110 6099.889 - 6125.095: 0.0465% ( 7) 00:07:24.110 6125.095 - 6150.302: 0.2194% ( 26) 00:07:24.110 6150.302 - 6175.508: 0.6649% ( 67) 00:07:24.110 6175.508 - 6200.714: 1.1702% ( 76) 00:07:24.110 6200.714 - 6225.920: 1.6157% ( 67) 00:07:24.110 6225.920 - 6251.126: 2.3404% ( 109) 00:07:24.110 6251.126 - 6276.332: 3.1184% ( 117) 00:07:24.110 6276.332 - 6301.538: 3.9029% ( 118) 00:07:24.110 6301.538 - 6326.745: 4.7407% ( 126) 00:07:24.110 6326.745 - 6351.951: 5.5053% ( 115) 00:07:24.110 6351.951 - 6377.157: 6.3098% ( 121) 00:07:24.110 6377.157 - 6402.363: 7.2473% ( 141) 00:07:24.110 6402.363 - 6427.569: 8.1981% ( 143) 00:07:24.110 6427.569 - 6452.775: 9.3152% ( 168) 00:07:24.110 6452.775 - 6503.188: 11.5226% ( 332) 00:07:24.110 6503.188 - 6553.600: 13.6503% ( 320) 00:07:24.110 6553.600 - 6604.012: 16.0040% ( 354) 00:07:24.110 6604.012 - 6654.425: 18.4242% ( 364) 00:07:24.110 6654.425 - 6704.837: 20.8245% ( 361) 00:07:24.110 6704.837 - 6755.249: 23.1848% ( 355) 00:07:24.110 6755.249 - 6805.662: 25.6782% ( 375) 00:07:24.110 6805.662 - 6856.074: 28.0452% ( 356) 00:07:24.110 6856.074 - 6906.486: 30.5053% ( 370) 00:07:24.110 6906.486 - 6956.898: 32.9721% ( 371) 00:07:24.110 6956.898 - 7007.311: 35.5851% ( 393) 00:07:24.110 7007.311 - 7057.723: 37.9322% ( 353) 00:07:24.110 7057.723 - 7108.135: 39.8471% ( 288) 00:07:24.110 7108.135 - 7158.548: 41.3830% ( 231) 00:07:24.110 7158.548 - 7208.960: 42.7460% ( 205) 00:07:24.110 7208.960 - 7259.372: 44.0226% ( 192) 00:07:24.110 7259.372 - 7309.785: 45.1463% ( 169) 00:07:24.110 7309.785 - 7360.197: 46.1769% ( 155) 00:07:24.110 7360.197 - 7410.609: 46.9282% ( 113) 00:07:24.110 7410.609 - 7461.022: 47.6263% ( 105) 00:07:24.110 7461.022 - 7511.434: 48.1848% ( 84) 00:07:24.110 7511.434 - 7561.846: 48.7101% ( 79) 00:07:24.110 7561.846 - 7612.258: 49.3285% ( 93) 00:07:24.110 7612.258 - 7662.671: 49.8604% ( 80) 00:07:24.110 7662.671 - 7713.083: 50.3125% ( 68) 00:07:24.110 7713.083 - 7763.495: 50.7912% ( 72) 
00:07:24.110 7763.495 - 7813.908: 51.2965% ( 76) 00:07:24.110 7813.908 - 7864.320: 51.7287% ( 65) 00:07:24.110 7864.320 - 7914.732: 52.1809% ( 68) 00:07:24.110 7914.732 - 7965.145: 52.5997% ( 63) 00:07:24.110 7965.145 - 8015.557: 52.9987% ( 60) 00:07:24.110 8015.557 - 8065.969: 53.4043% ( 61) 00:07:24.110 8065.969 - 8116.382: 53.8098% ( 61) 00:07:24.110 8116.382 - 8166.794: 54.2487% ( 66) 00:07:24.110 8166.794 - 8217.206: 54.6410% ( 59) 00:07:24.110 8217.206 - 8267.618: 55.0066% ( 55) 00:07:24.110 8267.618 - 8318.031: 55.4255% ( 63) 00:07:24.110 8318.031 - 8368.443: 55.7979% ( 56) 00:07:24.110 8368.443 - 8418.855: 56.2367% ( 66) 00:07:24.110 8418.855 - 8469.268: 56.6290% ( 59) 00:07:24.110 8469.268 - 8519.680: 56.9947% ( 55) 00:07:24.110 8519.680 - 8570.092: 57.4136% ( 63) 00:07:24.110 8570.092 - 8620.505: 57.7660% ( 53) 00:07:24.110 8620.505 - 8670.917: 58.1715% ( 61) 00:07:24.110 8670.917 - 8721.329: 58.6503% ( 72) 00:07:24.110 8721.329 - 8771.742: 59.1689% ( 78) 00:07:24.110 8771.742 - 8822.154: 59.7739% ( 91) 00:07:24.110 8822.154 - 8872.566: 60.5519% ( 117) 00:07:24.111 8872.566 - 8922.978: 61.3165% ( 115) 00:07:24.111 8922.978 - 8973.391: 62.1410% ( 124) 00:07:24.111 8973.391 - 9023.803: 62.9588% ( 123) 00:07:24.111 9023.803 - 9074.215: 63.8697% ( 137) 00:07:24.111 9074.215 - 9124.628: 64.8138% ( 142) 00:07:24.111 9124.628 - 9175.040: 65.7114% ( 135) 00:07:24.111 9175.040 - 9225.452: 66.6024% ( 134) 00:07:24.111 9225.452 - 9275.865: 67.5066% ( 136) 00:07:24.111 9275.865 - 9326.277: 68.4109% ( 136) 00:07:24.111 9326.277 - 9376.689: 69.2686% ( 129) 00:07:24.111 9376.689 - 9427.102: 70.1529% ( 133) 00:07:24.111 9427.102 - 9477.514: 71.0439% ( 134) 00:07:24.111 9477.514 - 9527.926: 71.9947% ( 143) 00:07:24.111 9527.926 - 9578.338: 72.8989% ( 136) 00:07:24.111 9578.338 - 9628.751: 73.8364% ( 141) 00:07:24.111 9628.751 - 9679.163: 74.7739% ( 141) 00:07:24.111 9679.163 - 9729.575: 75.5918% ( 123) 00:07:24.111 9729.575 - 9779.988: 76.3364% ( 112) 00:07:24.111 9779.988 - 9830.400: 77.0213% ( 103) 00:07:24.111 9830.400 - 9880.812: 77.6529% ( 95) 00:07:24.111 9880.812 - 9931.225: 78.2979% ( 97) 00:07:24.111 9931.225 - 9981.637: 78.8896% ( 89) 00:07:24.111 9981.637 - 10032.049: 79.4614% ( 86) 00:07:24.111 10032.049 - 10082.462: 79.9668% ( 76) 00:07:24.111 10082.462 - 10132.874: 80.4588% ( 74) 00:07:24.111 10132.874 - 10183.286: 80.9574% ( 75) 00:07:24.111 10183.286 - 10233.698: 81.4362% ( 72) 00:07:24.111 10233.698 - 10284.111: 81.9016% ( 70) 00:07:24.111 10284.111 - 10334.523: 82.4335% ( 80) 00:07:24.111 10334.523 - 10384.935: 82.9255% ( 74) 00:07:24.111 10384.935 - 10435.348: 83.4508% ( 79) 00:07:24.111 10435.348 - 10485.760: 83.9761% ( 79) 00:07:24.111 10485.760 - 10536.172: 84.4548% ( 72) 00:07:24.111 10536.172 - 10586.585: 84.9003% ( 67) 00:07:24.111 10586.585 - 10636.997: 85.4255% ( 79) 00:07:24.111 10636.997 - 10687.409: 85.9441% ( 78) 00:07:24.111 10687.409 - 10737.822: 86.4960% ( 83) 00:07:24.111 10737.822 - 10788.234: 86.9681% ( 71) 00:07:24.111 10788.234 - 10838.646: 87.4202% ( 68) 00:07:24.111 10838.646 - 10889.058: 87.8723% ( 68) 00:07:24.111 10889.058 - 10939.471: 88.3444% ( 71) 00:07:24.111 10939.471 - 10989.883: 88.7832% ( 66) 00:07:24.111 10989.883 - 11040.295: 89.1822% ( 60) 00:07:24.111 11040.295 - 11090.708: 89.5944% ( 62) 00:07:24.111 11090.708 - 11141.120: 89.9269% ( 50) 00:07:24.111 11141.120 - 11191.532: 90.2793% ( 53) 00:07:24.111 11191.532 - 11241.945: 90.6449% ( 55) 00:07:24.111 11241.945 - 11292.357: 90.9907% ( 52) 00:07:24.111 11292.357 - 11342.769: 91.2633% ( 41) 
00:07:24.111 11342.769 - 11393.182: 91.5093% ( 37) 00:07:24.111 11393.182 - 11443.594: 91.7420% ( 35) 00:07:24.111 11443.594 - 11494.006: 91.9681% ( 34) 00:07:24.111 11494.006 - 11544.418: 92.1941% ( 34) 00:07:24.111 11544.418 - 11594.831: 92.3737% ( 27) 00:07:24.111 11594.831 - 11645.243: 92.5665% ( 29) 00:07:24.111 11645.243 - 11695.655: 92.6928% ( 19) 00:07:24.111 11695.655 - 11746.068: 92.8324% ( 21) 00:07:24.111 11746.068 - 11796.480: 92.9588% ( 19) 00:07:24.111 11796.480 - 11846.892: 93.0918% ( 20) 00:07:24.111 11846.892 - 11897.305: 93.2314% ( 21) 00:07:24.111 11897.305 - 11947.717: 93.3511% ( 18) 00:07:24.111 11947.717 - 11998.129: 93.4375% ( 13) 00:07:24.111 11998.129 - 12048.542: 93.5439% ( 16) 00:07:24.111 12048.542 - 12098.954: 93.6636% ( 18) 00:07:24.111 12098.954 - 12149.366: 93.7633% ( 15) 00:07:24.111 12149.366 - 12199.778: 93.8963% ( 20) 00:07:24.111 12199.778 - 12250.191: 94.0359% ( 21) 00:07:24.111 12250.191 - 12300.603: 94.1622% ( 19) 00:07:24.111 12300.603 - 12351.015: 94.2886% ( 19) 00:07:24.111 12351.015 - 12401.428: 94.4215% ( 20) 00:07:24.111 12401.428 - 12451.840: 94.5678% ( 22) 00:07:24.111 12451.840 - 12502.252: 94.7207% ( 23) 00:07:24.111 12502.252 - 12552.665: 94.8271% ( 16) 00:07:24.111 12552.665 - 12603.077: 94.9468% ( 18) 00:07:24.111 12603.077 - 12653.489: 95.0798% ( 20) 00:07:24.111 12653.489 - 12703.902: 95.1928% ( 17) 00:07:24.111 12703.902 - 12754.314: 95.3324% ( 21) 00:07:24.111 12754.314 - 12804.726: 95.4654% ( 20) 00:07:24.111 12804.726 - 12855.138: 95.5851% ( 18) 00:07:24.111 12855.138 - 12905.551: 95.7114% ( 19) 00:07:24.111 12905.551 - 13006.375: 95.9508% ( 36) 00:07:24.111 13006.375 - 13107.200: 96.1835% ( 35) 00:07:24.111 13107.200 - 13208.025: 96.3830% ( 30) 00:07:24.111 13208.025 - 13308.849: 96.5691% ( 28) 00:07:24.111 13308.849 - 13409.674: 96.7154% ( 22) 00:07:24.111 13409.674 - 13510.498: 96.8883% ( 26) 00:07:24.111 13510.498 - 13611.323: 97.0745% ( 28) 00:07:24.111 13611.323 - 13712.148: 97.2872% ( 32) 00:07:24.111 13712.148 - 13812.972: 97.4668% ( 27) 00:07:24.111 13812.972 - 13913.797: 97.6463% ( 27) 00:07:24.111 13913.797 - 14014.622: 97.9056% ( 39) 00:07:24.111 14014.622 - 14115.446: 98.1316% ( 34) 00:07:24.111 14115.446 - 14216.271: 98.2912% ( 24) 00:07:24.111 14216.271 - 14317.095: 98.4574% ( 25) 00:07:24.111 14317.095 - 14417.920: 98.5771% ( 18) 00:07:24.111 14417.920 - 14518.745: 98.6702% ( 14) 00:07:24.111 14518.745 - 14619.569: 98.7500% ( 12) 00:07:24.111 14619.569 - 14720.394: 98.8364% ( 13) 00:07:24.111 14720.394 - 14821.218: 98.9096% ( 11) 00:07:24.111 14821.218 - 14922.043: 98.9894% ( 12) 00:07:24.111 14922.043 - 15022.868: 99.0293% ( 6) 00:07:24.111 15022.868 - 15123.692: 99.0691% ( 6) 00:07:24.111 15123.692 - 15224.517: 99.1090% ( 6) 00:07:24.111 15224.517 - 15325.342: 99.1489% ( 6) 00:07:24.111 16333.588 - 16434.412: 99.1556% ( 1) 00:07:24.111 16434.412 - 16535.237: 99.1755% ( 3) 00:07:24.111 16535.237 - 16636.062: 99.1888% ( 2) 00:07:24.111 16636.062 - 16736.886: 99.2088% ( 3) 00:07:24.111 16736.886 - 16837.711: 99.2221% ( 2) 00:07:24.111 16837.711 - 16938.535: 99.2487% ( 4) 00:07:24.111 16938.535 - 17039.360: 99.2686% ( 3) 00:07:24.111 17039.360 - 17140.185: 99.2952% ( 4) 00:07:24.111 17140.185 - 17241.009: 99.3152% ( 3) 00:07:24.111 17241.009 - 17341.834: 99.3418% ( 4) 00:07:24.111 17341.834 - 17442.658: 99.3684% ( 4) 00:07:24.111 17442.658 - 17543.483: 99.3883% ( 3) 00:07:24.111 17543.483 - 17644.308: 99.4149% ( 4) 00:07:24.111 17644.308 - 17745.132: 99.4415% ( 4) 00:07:24.111 17745.132 - 17845.957: 99.4681% ( 4) 
00:07:24.111 17845.957 - 17946.782: 99.4880% ( 3) 00:07:24.111 17946.782 - 18047.606: 99.5146% ( 4) 00:07:24.111 18047.606 - 18148.431: 99.5412% ( 4) 00:07:24.111 18148.431 - 18249.255: 99.5678% ( 4) 00:07:24.111 18249.255 - 18350.080: 99.5745% ( 1) 00:07:24.111 21979.766 - 22080.591: 99.5811% ( 1) 00:07:24.111 22080.591 - 22181.415: 99.6011% ( 3) 00:07:24.111 22181.415 - 22282.240: 99.6277% ( 4) 00:07:24.111 22282.240 - 22383.065: 99.6476% ( 3) 00:07:24.111 22383.065 - 22483.889: 99.6742% ( 4) 00:07:24.111 22483.889 - 22584.714: 99.6941% ( 3) 00:07:24.111 22584.714 - 22685.538: 99.7207% ( 4) 00:07:24.111 22685.538 - 22786.363: 99.7407% ( 3) 00:07:24.111 22786.363 - 22887.188: 99.7673% ( 4) 00:07:24.111 22887.188 - 22988.012: 99.7939% ( 4) 00:07:24.111 22988.012 - 23088.837: 99.8138% ( 3) 00:07:24.111 23088.837 - 23189.662: 99.8404% ( 4) 00:07:24.111 23189.662 - 23290.486: 99.8604% ( 3) 00:07:24.111 23290.486 - 23391.311: 99.8870% ( 4) 00:07:24.111 23391.311 - 23492.135: 99.9069% ( 3) 00:07:24.111 23492.135 - 23592.960: 99.9269% ( 3) 00:07:24.111 23592.960 - 23693.785: 99.9535% ( 4) 00:07:24.111 23693.785 - 23794.609: 99.9801% ( 4) 00:07:24.112 23794.609 - 23895.434: 100.0000% ( 3) 00:07:24.112 00:07:24.112 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:24.112 ============================================================================== 00:07:24.112 Range in us Cumulative IO count 00:07:24.112 6074.683 - 6099.889: 0.0133% ( 2) 00:07:24.112 6099.889 - 6125.095: 0.0266% ( 2) 00:07:24.112 6125.095 - 6150.302: 0.0465% ( 3) 00:07:24.112 6150.302 - 6175.508: 0.2527% ( 31) 00:07:24.112 6175.508 - 6200.714: 0.6184% ( 55) 00:07:24.112 6200.714 - 6225.920: 1.1769% ( 84) 00:07:24.112 6225.920 - 6251.126: 1.9149% ( 111) 00:07:24.112 6251.126 - 6276.332: 2.7726% ( 129) 00:07:24.112 6276.332 - 6301.538: 3.8963% ( 169) 00:07:24.112 6301.538 - 6326.745: 4.7939% ( 135) 00:07:24.112 6326.745 - 6351.951: 5.6649% ( 131) 00:07:24.112 6351.951 - 6377.157: 6.4827% ( 123) 00:07:24.112 6377.157 - 6402.363: 7.3604% ( 132) 00:07:24.112 6402.363 - 6427.569: 8.3178% ( 144) 00:07:24.112 6427.569 - 6452.775: 9.3750% ( 159) 00:07:24.112 6452.775 - 6503.188: 11.4561% ( 313) 00:07:24.112 6503.188 - 6553.600: 13.6104% ( 324) 00:07:24.112 6553.600 - 6604.012: 15.6981% ( 314) 00:07:24.112 6604.012 - 6654.425: 18.0386% ( 352) 00:07:24.112 6654.425 - 6704.837: 20.4920% ( 369) 00:07:24.112 6704.837 - 6755.249: 22.8723% ( 358) 00:07:24.112 6755.249 - 6805.662: 25.4056% ( 381) 00:07:24.112 6805.662 - 6856.074: 27.8723% ( 371) 00:07:24.112 6856.074 - 6906.486: 30.2726% ( 361) 00:07:24.112 6906.486 - 6956.898: 32.7527% ( 373) 00:07:24.112 6956.898 - 7007.311: 35.2327% ( 373) 00:07:24.112 7007.311 - 7057.723: 37.6862% ( 369) 00:07:24.112 7057.723 - 7108.135: 39.6011% ( 288) 00:07:24.112 7108.135 - 7158.548: 41.1503% ( 233) 00:07:24.112 7158.548 - 7208.960: 42.6130% ( 220) 00:07:24.112 7208.960 - 7259.372: 43.9495% ( 201) 00:07:24.112 7259.372 - 7309.785: 45.1263% ( 177) 00:07:24.112 7309.785 - 7360.197: 46.0838% ( 144) 00:07:24.112 7360.197 - 7410.609: 46.8750% ( 119) 00:07:24.112 7410.609 - 7461.022: 47.5864% ( 107) 00:07:24.112 7461.022 - 7511.434: 48.2912% ( 106) 00:07:24.112 7511.434 - 7561.846: 48.8231% ( 80) 00:07:24.112 7561.846 - 7612.258: 49.3085% ( 73) 00:07:24.112 7612.258 - 7662.671: 49.6742% ( 55) 00:07:24.112 7662.671 - 7713.083: 50.0997% ( 64) 00:07:24.112 7713.083 - 7763.495: 50.6117% ( 77) 00:07:24.112 7763.495 - 7813.908: 51.1636% ( 83) 00:07:24.112 7813.908 - 7864.320: 51.8218% ( 99) 
00:07:24.112 7864.320 - 7914.732: 52.3670% ( 82) 00:07:24.112 7914.732 - 7965.145: 52.7859% ( 63) 00:07:24.112 7965.145 - 8015.557: 53.2181% ( 65) 00:07:24.112 8015.557 - 8065.969: 53.6370% ( 63) 00:07:24.112 8065.969 - 8116.382: 54.0027% ( 55) 00:07:24.112 8116.382 - 8166.794: 54.3816% ( 57) 00:07:24.112 8166.794 - 8217.206: 54.7074% ( 49) 00:07:24.112 8217.206 - 8267.618: 55.0665% ( 54) 00:07:24.112 8267.618 - 8318.031: 55.4322% ( 55) 00:07:24.112 8318.031 - 8368.443: 55.8311% ( 60) 00:07:24.112 8368.443 - 8418.855: 56.2301% ( 60) 00:07:24.112 8418.855 - 8469.268: 56.6423% ( 62) 00:07:24.112 8469.268 - 8519.680: 57.0545% ( 62) 00:07:24.112 8519.680 - 8570.092: 57.5000% ( 67) 00:07:24.112 8570.092 - 8620.505: 58.0186% ( 78) 00:07:24.112 8620.505 - 8670.917: 58.4375% ( 63) 00:07:24.112 8670.917 - 8721.329: 58.9495% ( 77) 00:07:24.112 8721.329 - 8771.742: 59.4814% ( 80) 00:07:24.112 8771.742 - 8822.154: 60.0133% ( 80) 00:07:24.112 8822.154 - 8872.566: 60.4920% ( 72) 00:07:24.112 8872.566 - 8922.978: 61.1902% ( 105) 00:07:24.112 8922.978 - 8973.391: 61.9947% ( 121) 00:07:24.112 8973.391 - 9023.803: 62.7992% ( 121) 00:07:24.112 9023.803 - 9074.215: 63.6968% ( 135) 00:07:24.112 9074.215 - 9124.628: 64.6277% ( 140) 00:07:24.112 9124.628 - 9175.040: 65.5585% ( 140) 00:07:24.112 9175.040 - 9225.452: 66.5226% ( 145) 00:07:24.112 9225.452 - 9275.865: 67.5598% ( 156) 00:07:24.112 9275.865 - 9326.277: 68.5306% ( 146) 00:07:24.112 9326.277 - 9376.689: 69.4282% ( 135) 00:07:24.112 9376.689 - 9427.102: 70.4189% ( 149) 00:07:24.112 9427.102 - 9477.514: 71.3165% ( 135) 00:07:24.112 9477.514 - 9527.926: 72.3604% ( 157) 00:07:24.112 9527.926 - 9578.338: 73.3444% ( 148) 00:07:24.112 9578.338 - 9628.751: 74.1888% ( 127) 00:07:24.112 9628.751 - 9679.163: 75.0798% ( 134) 00:07:24.112 9679.163 - 9729.575: 75.8577% ( 117) 00:07:24.112 9729.575 - 9779.988: 76.6622% ( 121) 00:07:24.112 9779.988 - 9830.400: 77.4136% ( 113) 00:07:24.112 9830.400 - 9880.812: 78.1649% ( 113) 00:07:24.112 9880.812 - 9931.225: 78.7832% ( 93) 00:07:24.112 9931.225 - 9981.637: 79.3152% ( 80) 00:07:24.112 9981.637 - 10032.049: 79.8471% ( 80) 00:07:24.112 10032.049 - 10082.462: 80.2793% ( 65) 00:07:24.112 10082.462 - 10132.874: 80.7247% ( 67) 00:07:24.112 10132.874 - 10183.286: 81.2633% ( 81) 00:07:24.112 10183.286 - 10233.698: 81.6755% ( 62) 00:07:24.112 10233.698 - 10284.111: 82.0479% ( 56) 00:07:24.112 10284.111 - 10334.523: 82.4202% ( 56) 00:07:24.112 10334.523 - 10384.935: 82.7726% ( 53) 00:07:24.112 10384.935 - 10435.348: 83.1516% ( 57) 00:07:24.112 10435.348 - 10485.760: 83.5971% ( 67) 00:07:24.112 10485.760 - 10536.172: 84.0957% ( 75) 00:07:24.112 10536.172 - 10586.585: 84.6077% ( 77) 00:07:24.112 10586.585 - 10636.997: 85.1130% ( 76) 00:07:24.112 10636.997 - 10687.409: 85.6582% ( 82) 00:07:24.112 10687.409 - 10737.822: 86.1902% ( 80) 00:07:24.112 10737.822 - 10788.234: 86.6822% ( 74) 00:07:24.112 10788.234 - 10838.646: 87.2008% ( 78) 00:07:24.112 10838.646 - 10889.058: 87.6995% ( 75) 00:07:24.112 10889.058 - 10939.471: 88.1383% ( 66) 00:07:24.112 10939.471 - 10989.883: 88.5372% ( 60) 00:07:24.112 10989.883 - 11040.295: 88.9362% ( 60) 00:07:24.112 11040.295 - 11090.708: 89.3218% ( 58) 00:07:24.112 11090.708 - 11141.120: 89.7207% ( 60) 00:07:24.112 11141.120 - 11191.532: 90.0931% ( 56) 00:07:24.112 11191.532 - 11241.945: 90.5053% ( 62) 00:07:24.112 11241.945 - 11292.357: 90.8644% ( 54) 00:07:24.112 11292.357 - 11342.769: 91.2101% ( 52) 00:07:24.112 11342.769 - 11393.182: 91.5160% ( 46) 00:07:24.112 11393.182 - 11443.594: 91.7487% ( 
35) 00:07:24.112 11443.594 - 11494.006: 91.9747% ( 34) 00:07:24.112 11494.006 - 11544.418: 92.1809% ( 31) 00:07:24.112 11544.418 - 11594.831: 92.4202% ( 36) 00:07:24.112 11594.831 - 11645.243: 92.5931% ( 26) 00:07:24.112 11645.243 - 11695.655: 92.7527% ( 24) 00:07:24.112 11695.655 - 11746.068: 92.8457% ( 14) 00:07:24.112 11746.068 - 11796.480: 92.9255% ( 12) 00:07:24.112 11796.480 - 11846.892: 93.0452% ( 18) 00:07:24.112 11846.892 - 11897.305: 93.1582% ( 17) 00:07:24.112 11897.305 - 11947.717: 93.2447% ( 13) 00:07:24.112 11947.717 - 11998.129: 93.3644% ( 18) 00:07:24.112 11998.129 - 12048.542: 93.5040% ( 21) 00:07:24.112 12048.542 - 12098.954: 93.6835% ( 27) 00:07:24.112 12098.954 - 12149.366: 93.9362% ( 38) 00:07:24.112 12149.366 - 12199.778: 94.1423% ( 31) 00:07:24.112 12199.778 - 12250.191: 94.2952% ( 23) 00:07:24.112 12250.191 - 12300.603: 94.4215% ( 19) 00:07:24.112 12300.603 - 12351.015: 94.5479% ( 19) 00:07:24.112 12351.015 - 12401.428: 94.7274% ( 27) 00:07:24.112 12401.428 - 12451.840: 94.8537% ( 19) 00:07:24.112 12451.840 - 12502.252: 94.9734% ( 18) 00:07:24.112 12502.252 - 12552.665: 95.0997% ( 19) 00:07:24.112 12552.665 - 12603.077: 95.2194% ( 18) 00:07:24.112 12603.077 - 12653.489: 95.3391% ( 18) 00:07:24.112 12653.489 - 12703.902: 95.4455% ( 16) 00:07:24.112 12703.902 - 12754.314: 95.5652% ( 18) 00:07:24.112 12754.314 - 12804.726: 95.7247% ( 24) 00:07:24.112 12804.726 - 12855.138: 95.8311% ( 16) 00:07:24.112 12855.138 - 12905.551: 95.9574% ( 19) 00:07:24.112 12905.551 - 13006.375: 96.2035% ( 37) 00:07:24.112 13006.375 - 13107.200: 96.3830% ( 27) 00:07:24.112 13107.200 - 13208.025: 96.5824% ( 30) 00:07:24.112 13208.025 - 13308.849: 96.7819% ( 30) 00:07:24.112 13308.849 - 13409.674: 96.9282% ( 22) 00:07:24.112 13409.674 - 13510.498: 97.0479% ( 18) 00:07:24.112 13510.498 - 13611.323: 97.1543% ( 16) 00:07:24.112 13611.323 - 13712.148: 97.2673% ( 17) 00:07:24.112 13712.148 - 13812.972: 97.3870% ( 18) 00:07:24.112 13812.972 - 13913.797: 97.4668% ( 12) 00:07:24.112 13913.797 - 14014.622: 97.5731% ( 16) 00:07:24.112 14014.622 - 14115.446: 97.7327% ( 24) 00:07:24.112 14115.446 - 14216.271: 97.8524% ( 18) 00:07:24.112 14216.271 - 14317.095: 97.9787% ( 19) 00:07:24.112 14317.095 - 14417.920: 98.1051% ( 19) 00:07:24.112 14417.920 - 14518.745: 98.3178% ( 32) 00:07:24.112 14518.745 - 14619.569: 98.5173% ( 30) 00:07:24.112 14619.569 - 14720.394: 98.6569% ( 21) 00:07:24.112 14720.394 - 14821.218: 98.8098% ( 23) 00:07:24.112 14821.218 - 14922.043: 98.9495% ( 21) 00:07:24.112 14922.043 - 15022.868: 99.1024% ( 23) 00:07:24.112 15022.868 - 15123.692: 99.2154% ( 17) 00:07:24.112 15123.692 - 15224.517: 99.2753% ( 9) 00:07:24.112 15224.517 - 15325.342: 99.3484% ( 11) 00:07:24.112 15325.342 - 15426.166: 99.3684% ( 3) 00:07:24.112 15426.166 - 15526.991: 99.3949% ( 4) 00:07:24.112 15526.991 - 15627.815: 99.4215% ( 4) 00:07:24.112 15627.815 - 15728.640: 99.4481% ( 4) 00:07:24.112 15728.640 - 15829.465: 99.4747% ( 4) 00:07:24.112 15829.465 - 15930.289: 99.5013% ( 4) 00:07:24.112 15930.289 - 16031.114: 99.5279% ( 4) 00:07:24.112 16031.114 - 16131.938: 99.5479% ( 3) 00:07:24.112 16131.938 - 16232.763: 99.5545% ( 1) 00:07:24.113 16232.763 - 16333.588: 99.5745% ( 3) 00:07:24.113 20164.923 - 20265.748: 99.5878% ( 2) 00:07:24.113 20265.748 - 20366.572: 99.6077% ( 3) 00:07:24.113 20366.572 - 20467.397: 99.6343% ( 4) 00:07:24.113 20467.397 - 20568.222: 99.6543% ( 3) 00:07:24.113 20568.222 - 20669.046: 99.6742% ( 3) 00:07:24.113 20669.046 - 20769.871: 99.6941% ( 3) 00:07:24.113 20769.871 - 20870.695: 99.7207% ( 
4) 00:07:24.113 20870.695 - 20971.520: 99.7407% ( 3) 00:07:24.113 20971.520 - 21072.345: 99.7673% ( 4) 00:07:24.113 21072.345 - 21173.169: 99.7872% ( 3) 00:07:24.113 21173.169 - 21273.994: 99.8138% ( 4) 00:07:24.113 21273.994 - 21374.818: 99.8338% ( 3) 00:07:24.113 21374.818 - 21475.643: 99.8604% ( 4) 00:07:24.113 21475.643 - 21576.468: 99.8870% ( 4) 00:07:24.113 21576.468 - 21677.292: 99.9069% ( 3) 00:07:24.113 21677.292 - 21778.117: 99.9335% ( 4) 00:07:24.113 21778.117 - 21878.942: 99.9535% ( 3) 00:07:24.113 21878.942 - 21979.766: 99.9801% ( 4) 00:07:24.113 21979.766 - 22080.591: 100.0000% ( 3) 00:07:24.113 00:07:24.113 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:24.113 ============================================================================== 00:07:24.113 Range in us Cumulative IO count 00:07:24.113 6099.889 - 6125.095: 0.0066% ( 1) 00:07:24.113 6125.095 - 6150.302: 0.0931% ( 13) 00:07:24.113 6150.302 - 6175.508: 0.2394% ( 22) 00:07:24.113 6175.508 - 6200.714: 0.6383% ( 60) 00:07:24.113 6200.714 - 6225.920: 1.3364% ( 105) 00:07:24.113 6225.920 - 6251.126: 2.2939% ( 144) 00:07:24.113 6251.126 - 6276.332: 3.3644% ( 161) 00:07:24.113 6276.332 - 6301.538: 4.1489% ( 118) 00:07:24.113 6301.538 - 6326.745: 4.8537% ( 106) 00:07:24.113 6326.745 - 6351.951: 5.5785% ( 109) 00:07:24.113 6351.951 - 6377.157: 6.4694% ( 134) 00:07:24.113 6377.157 - 6402.363: 7.3803% ( 137) 00:07:24.113 6402.363 - 6427.569: 8.2713% ( 134) 00:07:24.113 6427.569 - 6452.775: 9.1955% ( 139) 00:07:24.113 6452.775 - 6503.188: 11.2367% ( 307) 00:07:24.113 6503.188 - 6553.600: 13.3910% ( 324) 00:07:24.113 6553.600 - 6604.012: 15.6582% ( 341) 00:07:24.113 6604.012 - 6654.425: 18.0851% ( 365) 00:07:24.113 6654.425 - 6704.837: 20.5186% ( 366) 00:07:24.113 6704.837 - 6755.249: 22.8524% ( 351) 00:07:24.113 6755.249 - 6805.662: 25.1862% ( 351) 00:07:24.113 6805.662 - 6856.074: 27.6130% ( 365) 00:07:24.113 6856.074 - 6906.486: 30.0931% ( 373) 00:07:24.113 6906.486 - 6956.898: 32.5798% ( 374) 00:07:24.113 6956.898 - 7007.311: 35.0997% ( 379) 00:07:24.113 7007.311 - 7057.723: 37.5199% ( 364) 00:07:24.113 7057.723 - 7108.135: 39.4814% ( 295) 00:07:24.113 7108.135 - 7158.548: 41.0306% ( 233) 00:07:24.113 7158.548 - 7208.960: 42.5465% ( 228) 00:07:24.113 7208.960 - 7259.372: 43.7899% ( 187) 00:07:24.113 7259.372 - 7309.785: 44.8737% ( 163) 00:07:24.113 7309.785 - 7360.197: 45.7048% ( 125) 00:07:24.113 7360.197 - 7410.609: 46.4229% ( 108) 00:07:24.113 7410.609 - 7461.022: 47.1609% ( 111) 00:07:24.113 7461.022 - 7511.434: 47.7660% ( 91) 00:07:24.113 7511.434 - 7561.846: 48.3178% ( 83) 00:07:24.113 7561.846 - 7612.258: 48.9029% ( 88) 00:07:24.113 7612.258 - 7662.671: 49.3816% ( 72) 00:07:24.113 7662.671 - 7713.083: 49.8936% ( 77) 00:07:24.113 7713.083 - 7763.495: 50.4056% ( 77) 00:07:24.113 7763.495 - 7813.908: 50.9109% ( 76) 00:07:24.113 7813.908 - 7864.320: 51.3697% ( 69) 00:07:24.113 7864.320 - 7914.732: 51.8484% ( 72) 00:07:24.113 7914.732 - 7965.145: 52.3072% ( 69) 00:07:24.113 7965.145 - 8015.557: 52.7726% ( 70) 00:07:24.113 8015.557 - 8065.969: 53.2114% ( 66) 00:07:24.113 8065.969 - 8116.382: 53.6835% ( 71) 00:07:24.113 8116.382 - 8166.794: 54.0891% ( 61) 00:07:24.113 8166.794 - 8217.206: 54.5080% ( 63) 00:07:24.113 8217.206 - 8267.618: 54.9801% ( 71) 00:07:24.113 8267.618 - 8318.031: 55.3790% ( 60) 00:07:24.113 8318.031 - 8368.443: 55.8710% ( 74) 00:07:24.113 8368.443 - 8418.855: 56.3098% ( 66) 00:07:24.113 8418.855 - 8469.268: 56.7952% ( 73) 00:07:24.113 8469.268 - 8519.680: 57.2274% ( 65) 00:07:24.113 
8519.680 - 8570.092: 57.6928% ( 70) 00:07:24.113 8570.092 - 8620.505: 58.1316% ( 66) 00:07:24.113 8620.505 - 8670.917: 58.5439% ( 62) 00:07:24.113 8670.917 - 8721.329: 58.9894% ( 67) 00:07:24.113 8721.329 - 8771.742: 59.5279% ( 81) 00:07:24.113 8771.742 - 8822.154: 60.0598% ( 80) 00:07:24.113 8822.154 - 8872.566: 60.6051% ( 82) 00:07:24.113 8872.566 - 8922.978: 61.2234% ( 93) 00:07:24.113 8922.978 - 8973.391: 61.9814% ( 114) 00:07:24.113 8973.391 - 9023.803: 62.8856% ( 136) 00:07:24.113 9023.803 - 9074.215: 63.7899% ( 136) 00:07:24.113 9074.215 - 9124.628: 64.6875% ( 135) 00:07:24.113 9124.628 - 9175.040: 65.6649% ( 147) 00:07:24.113 9175.040 - 9225.452: 66.7021% ( 156) 00:07:24.113 9225.452 - 9275.865: 67.6130% ( 137) 00:07:24.113 9275.865 - 9326.277: 68.4907% ( 132) 00:07:24.113 9326.277 - 9376.689: 69.4415% ( 143) 00:07:24.113 9376.689 - 9427.102: 70.4521% ( 152) 00:07:24.113 9427.102 - 9477.514: 71.4694% ( 153) 00:07:24.113 9477.514 - 9527.926: 72.4069% ( 141) 00:07:24.113 9527.926 - 9578.338: 73.3910% ( 148) 00:07:24.113 9578.338 - 9628.751: 74.4215% ( 155) 00:07:24.113 9628.751 - 9679.163: 75.3923% ( 146) 00:07:24.113 9679.163 - 9729.575: 76.2633% ( 131) 00:07:24.113 9729.575 - 9779.988: 77.1343% ( 131) 00:07:24.113 9779.988 - 9830.400: 77.9056% ( 116) 00:07:24.113 9830.400 - 9880.812: 78.6902% ( 118) 00:07:24.113 9880.812 - 9931.225: 79.3351% ( 97) 00:07:24.113 9931.225 - 9981.637: 79.9269% ( 89) 00:07:24.113 9981.637 - 10032.049: 80.5186% ( 89) 00:07:24.113 10032.049 - 10082.462: 80.9973% ( 72) 00:07:24.113 10082.462 - 10132.874: 81.5160% ( 78) 00:07:24.113 10132.874 - 10183.286: 82.0213% ( 76) 00:07:24.113 10183.286 - 10233.698: 82.4335% ( 62) 00:07:24.113 10233.698 - 10284.111: 82.7726% ( 51) 00:07:24.113 10284.111 - 10334.523: 83.1649% ( 59) 00:07:24.113 10334.523 - 10384.935: 83.5505% ( 58) 00:07:24.113 10384.935 - 10435.348: 83.8630% ( 47) 00:07:24.113 10435.348 - 10485.760: 84.1955% ( 50) 00:07:24.113 10485.760 - 10536.172: 84.6410% ( 67) 00:07:24.113 10536.172 - 10586.585: 85.0731% ( 65) 00:07:24.113 10586.585 - 10636.997: 85.5186% ( 67) 00:07:24.113 10636.997 - 10687.409: 85.9907% ( 71) 00:07:24.113 10687.409 - 10737.822: 86.4162% ( 64) 00:07:24.113 10737.822 - 10788.234: 86.8750% ( 69) 00:07:24.113 10788.234 - 10838.646: 87.2739% ( 60) 00:07:24.113 10838.646 - 10889.058: 87.6795% ( 61) 00:07:24.113 10889.058 - 10939.471: 88.0851% ( 61) 00:07:24.113 10939.471 - 10989.883: 88.4973% ( 62) 00:07:24.113 10989.883 - 11040.295: 88.9029% ( 61) 00:07:24.113 11040.295 - 11090.708: 89.2819% ( 57) 00:07:24.113 11090.708 - 11141.120: 89.6676% ( 58) 00:07:24.113 11141.120 - 11191.532: 90.0399% ( 56) 00:07:24.113 11191.532 - 11241.945: 90.4122% ( 56) 00:07:24.113 11241.945 - 11292.357: 90.7447% ( 50) 00:07:24.113 11292.357 - 11342.769: 91.0904% ( 52) 00:07:24.113 11342.769 - 11393.182: 91.4229% ( 50) 00:07:24.113 11393.182 - 11443.594: 91.6755% ( 38) 00:07:24.113 11443.594 - 11494.006: 91.9149% ( 36) 00:07:24.113 11494.006 - 11544.418: 92.1144% ( 30) 00:07:24.113 11544.418 - 11594.831: 92.3005% ( 28) 00:07:24.113 11594.831 - 11645.243: 92.5199% ( 33) 00:07:24.113 11645.243 - 11695.655: 92.7194% ( 30) 00:07:24.113 11695.655 - 11746.068: 92.8856% ( 25) 00:07:24.113 11746.068 - 11796.480: 93.0519% ( 25) 00:07:24.113 11796.480 - 11846.892: 93.1915% ( 21) 00:07:24.113 11846.892 - 11897.305: 93.3245% ( 20) 00:07:24.113 11897.305 - 11947.717: 93.4441% ( 18) 00:07:24.113 11947.717 - 11998.129: 93.5572% ( 17) 00:07:24.113 11998.129 - 12048.542: 93.6835% ( 19) 00:07:24.113 12048.542 - 12098.954: 
93.7899% ( 16) 00:07:24.113 12098.954 - 12149.366: 93.8963% ( 16) 00:07:24.113 12149.366 - 12199.778: 93.9960% ( 15) 00:07:24.113 12199.778 - 12250.191: 94.1223% ( 19) 00:07:24.113 12250.191 - 12300.603: 94.2088% ( 13) 00:07:24.113 12300.603 - 12351.015: 94.3418% ( 20) 00:07:24.113 12351.015 - 12401.428: 94.4481% ( 16) 00:07:24.113 12401.428 - 12451.840: 94.5612% ( 17) 00:07:24.113 12451.840 - 12502.252: 94.6941% ( 20) 00:07:24.113 12502.252 - 12552.665: 94.8338% ( 21) 00:07:24.113 12552.665 - 12603.077: 94.9402% ( 16) 00:07:24.113 12603.077 - 12653.489: 95.0332% ( 14) 00:07:24.113 12653.489 - 12703.902: 95.1330% ( 15) 00:07:24.113 12703.902 - 12754.314: 95.2128% ( 12) 00:07:24.113 12754.314 - 12804.726: 95.3524% ( 21) 00:07:24.113 12804.726 - 12855.138: 95.5186% ( 25) 00:07:24.113 12855.138 - 12905.551: 95.6782% ( 24) 00:07:24.113 12905.551 - 13006.375: 96.0106% ( 50) 00:07:24.113 13006.375 - 13107.200: 96.3032% ( 44) 00:07:24.113 13107.200 - 13208.025: 96.5359% ( 35) 00:07:24.113 13208.025 - 13308.849: 96.7487% ( 32) 00:07:24.113 13308.849 - 13409.674: 96.9614% ( 32) 00:07:24.113 13409.674 - 13510.498: 97.1875% ( 34) 00:07:24.113 13510.498 - 13611.323: 97.3604% ( 26) 00:07:24.113 13611.323 - 13712.148: 97.5332% ( 26) 00:07:24.113 13712.148 - 13812.972: 97.6729% ( 21) 00:07:24.113 13812.972 - 13913.797: 97.8191% ( 22) 00:07:24.113 13913.797 - 14014.622: 97.9920% ( 26) 00:07:24.113 14014.622 - 14115.446: 98.1516% ( 24) 00:07:24.113 14115.446 - 14216.271: 98.2846% ( 20) 00:07:24.113 14216.271 - 14317.095: 98.4309% ( 22) 00:07:24.113 14317.095 - 14417.920: 98.5771% ( 22) 00:07:24.113 14417.920 - 14518.745: 98.7234% ( 22) 00:07:24.113 14518.745 - 14619.569: 98.8431% ( 18) 00:07:24.113 14619.569 - 14720.394: 98.9694% ( 19) 00:07:24.113 14720.394 - 14821.218: 99.0824% ( 17) 00:07:24.113 14821.218 - 14922.043: 99.1822% ( 15) 00:07:24.113 14922.043 - 15022.868: 99.2686% ( 13) 00:07:24.113 15022.868 - 15123.692: 99.3484% ( 12) 00:07:24.113 15123.692 - 15224.517: 99.3816% ( 5) 00:07:24.113 15224.517 - 15325.342: 99.3949% ( 2) 00:07:24.113 15325.342 - 15426.166: 99.4348% ( 6) 00:07:24.113 15426.166 - 15526.991: 99.4814% ( 7) 00:07:24.113 15526.991 - 15627.815: 99.5213% ( 6) 00:07:24.113 15627.815 - 15728.640: 99.5678% ( 7) 00:07:24.113 15728.640 - 15829.465: 99.5745% ( 1) 00:07:24.113 18350.080 - 18450.905: 99.6011% ( 4) 00:07:24.113 18450.905 - 18551.729: 99.6210% ( 3) 00:07:24.114 18551.729 - 18652.554: 99.6410% ( 3) 00:07:24.114 18652.554 - 18753.378: 99.6676% ( 4) 00:07:24.114 18753.378 - 18854.203: 99.6941% ( 4) 00:07:24.114 18854.203 - 18955.028: 99.7141% ( 3) 00:07:24.114 18955.028 - 19055.852: 99.7407% ( 4) 00:07:24.114 19055.852 - 19156.677: 99.7606% ( 3) 00:07:24.114 19156.677 - 19257.502: 99.7872% ( 4) 00:07:24.114 19257.502 - 19358.326: 99.8138% ( 4) 00:07:24.114 19358.326 - 19459.151: 99.8338% ( 3) 00:07:24.114 19459.151 - 19559.975: 99.8604% ( 4) 00:07:24.114 19559.975 - 19660.800: 99.8803% ( 3) 00:07:24.114 19660.800 - 19761.625: 99.9069% ( 4) 00:07:24.114 19761.625 - 19862.449: 99.9335% ( 4) 00:07:24.114 19862.449 - 19963.274: 99.9535% ( 3) 00:07:24.114 19963.274 - 20064.098: 99.9801% ( 4) 00:07:24.114 20064.098 - 20164.923: 100.0000% ( 3) 00:07:24.114 00:07:24.114 22:48:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:25.500 Initializing NVMe Controllers 00:07:25.500 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:25.500 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 
00:07:25.500 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:25.500 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:25.500 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:25.500 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:25.500 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:25.500 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:25.500 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:25.500 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:25.500 Initialization complete. Launching workers. 00:07:25.500 ======================================================== 00:07:25.500 Latency(us) 00:07:25.500 Device Information : IOPS MiB/s Average min max 00:07:25.500 PCIE (0000:00:13.0) NSID 1 from core 0: 13852.33 162.33 9254.51 6295.73 38314.72 00:07:25.500 PCIE (0000:00:10.0) NSID 1 from core 0: 13852.33 162.33 9237.91 6165.64 36989.15 00:07:25.500 PCIE (0000:00:11.0) NSID 1 from core 0: 13852.33 162.33 9220.63 6386.02 34803.54 00:07:25.500 PCIE (0000:00:12.0) NSID 1 from core 0: 13852.33 162.33 9204.18 6253.76 33887.01 00:07:25.500 PCIE (0000:00:12.0) NSID 2 from core 0: 13852.33 162.33 9187.65 6027.64 32281.06 00:07:25.500 PCIE (0000:00:12.0) NSID 3 from core 0: 13916.17 163.08 9129.21 6026.18 24962.43 00:07:25.500 ======================================================== 00:07:25.500 Total : 83177.82 974.74 9205.62 6026.18 38314.72 00:07:25.500 00:07:25.500 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:25.500 ================================================================================= 00:07:25.500 1.00000% : 6654.425us 00:07:25.500 10.00000% : 7612.258us 00:07:25.500 25.00000% : 8368.443us 00:07:25.500 50.00000% : 8872.566us 00:07:25.500 75.00000% : 9376.689us 00:07:25.500 90.00000% : 10737.822us 00:07:25.500 95.00000% : 11746.068us 00:07:25.500 98.00000% : 14417.920us 00:07:25.500 99.00000% : 15224.517us 00:07:25.500 99.50000% : 32465.526us 00:07:25.500 99.90000% : 38111.705us 00:07:25.500 99.99000% : 38313.354us 00:07:25.500 99.99900% : 38515.003us 00:07:25.500 99.99990% : 38515.003us 00:07:25.500 99.99999% : 38515.003us 00:07:25.500 00:07:25.500 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:25.500 ================================================================================= 00:07:25.500 1.00000% : 6704.837us 00:07:25.500 10.00000% : 7662.671us 00:07:25.500 25.00000% : 8318.031us 00:07:25.500 50.00000% : 8822.154us 00:07:25.500 75.00000% : 9477.514us 00:07:25.500 90.00000% : 10737.822us 00:07:25.500 95.00000% : 12048.542us 00:07:25.500 98.00000% : 14417.920us 00:07:25.500 99.00000% : 15426.166us 00:07:25.500 99.50000% : 30247.385us 00:07:25.500 99.90000% : 36700.160us 00:07:25.500 99.99000% : 37103.458us 00:07:25.500 99.99900% : 37103.458us 00:07:25.500 99.99990% : 37103.458us 00:07:25.500 99.99999% : 37103.458us 00:07:25.500 00:07:25.500 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:25.500 ================================================================================= 00:07:25.500 1.00000% : 6704.837us 00:07:25.500 10.00000% : 7713.083us 00:07:25.500 25.00000% : 8368.443us 00:07:25.500 50.00000% : 8822.154us 00:07:25.500 75.00000% : 9427.102us 00:07:25.500 90.00000% : 10636.997us 00:07:25.500 95.00000% : 12250.191us 00:07:25.500 98.00000% : 14216.271us 00:07:25.500 99.00000% : 15627.815us 00:07:25.500 99.50000% : 28432.542us 00:07:25.500 99.90000% : 34482.018us 00:07:25.500 99.99000% : 34885.317us 00:07:25.500 99.99900% 
: 34885.317us 00:07:25.500 99.99990% : 34885.317us 00:07:25.500 99.99999% : 34885.317us 00:07:25.500 00:07:25.500 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:25.500 ================================================================================= 00:07:25.500 1.00000% : 6654.425us 00:07:25.500 10.00000% : 7662.671us 00:07:25.500 25.00000% : 8368.443us 00:07:25.500 50.00000% : 8822.154us 00:07:25.500 75.00000% : 9376.689us 00:07:25.500 90.00000% : 10636.997us 00:07:25.500 95.00000% : 12300.603us 00:07:25.500 98.00000% : 14216.271us 00:07:25.500 99.00000% : 15930.289us 00:07:25.500 99.50000% : 27222.646us 00:07:25.500 99.90000% : 33675.422us 00:07:25.500 99.99000% : 33877.071us 00:07:25.500 99.99900% : 34078.720us 00:07:25.500 99.99990% : 34078.720us 00:07:25.500 99.99999% : 34078.720us 00:07:25.500 00:07:25.500 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:25.500 ================================================================================= 00:07:25.500 1.00000% : 6654.425us 00:07:25.500 10.00000% : 7662.671us 00:07:25.500 25.00000% : 8368.443us 00:07:25.500 50.00000% : 8872.566us 00:07:25.500 75.00000% : 9376.689us 00:07:25.500 90.00000% : 10687.409us 00:07:25.500 95.00000% : 11998.129us 00:07:25.500 98.00000% : 13712.148us 00:07:25.500 99.00000% : 16232.763us 00:07:25.500 99.50000% : 25306.978us 00:07:25.500 99.90000% : 32062.228us 00:07:25.500 99.99000% : 32263.877us 00:07:25.500 99.99900% : 32465.526us 00:07:25.500 99.99990% : 32465.526us 00:07:25.500 99.99999% : 32465.526us 00:07:25.500 00:07:25.500 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:25.501 ================================================================================= 00:07:25.501 1.00000% : 6654.425us 00:07:25.501 10.00000% : 7612.258us 00:07:25.501 25.00000% : 8368.443us 00:07:25.501 50.00000% : 8872.566us 00:07:25.501 75.00000% : 9427.102us 00:07:25.501 90.00000% : 10737.822us 00:07:25.501 95.00000% : 11746.068us 00:07:25.501 98.00000% : 13913.797us 00:07:25.501 99.00000% : 15829.465us 00:07:25.501 99.50000% : 18148.431us 00:07:25.501 99.90000% : 24702.031us 00:07:25.501 99.99000% : 25004.505us 00:07:25.501 99.99900% : 25004.505us 00:07:25.501 99.99990% : 25004.505us 00:07:25.501 99.99999% : 25004.505us 00:07:25.501 00:07:25.501 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:25.501 ============================================================================== 00:07:25.501 Range in us Cumulative IO count 00:07:25.501 6276.332 - 6301.538: 0.0072% ( 1) 00:07:25.501 6301.538 - 6326.745: 0.0144% ( 1) 00:07:25.501 6326.745 - 6351.951: 0.0216% ( 1) 00:07:25.501 6377.157 - 6402.363: 0.0648% ( 6) 00:07:25.501 6402.363 - 6427.569: 0.0936% ( 4) 00:07:25.501 6427.569 - 6452.775: 0.1224% ( 4) 00:07:25.501 6452.775 - 6503.188: 0.2520% ( 18) 00:07:25.501 6503.188 - 6553.600: 0.4896% ( 33) 00:07:25.501 6553.600 - 6604.012: 0.8209% ( 46) 00:07:25.501 6604.012 - 6654.425: 1.0657% ( 34) 00:07:25.501 6654.425 - 6704.837: 1.4545% ( 54) 00:07:25.501 6704.837 - 6755.249: 1.6993% ( 34) 00:07:25.501 6755.249 - 6805.662: 1.9153% ( 30) 00:07:25.501 6805.662 - 6856.074: 2.2033% ( 40) 00:07:25.501 6856.074 - 6906.486: 2.4986% ( 41) 00:07:25.501 6906.486 - 6956.898: 2.7866% ( 40) 00:07:25.501 6956.898 - 7007.311: 3.1898% ( 56) 00:07:25.501 7007.311 - 7057.723: 3.8522% ( 92) 00:07:25.501 7057.723 - 7108.135: 4.7091% ( 119) 00:07:25.501 7108.135 - 7158.548: 5.4291% ( 100) 00:07:25.501 7158.548 - 7208.960: 6.0628% ( 88) 00:07:25.501 7208.960 - 
7259.372: 6.5236% ( 64) 00:07:25.501 7259.372 - 7309.785: 6.9988% ( 66) 00:07:25.501 7309.785 - 7360.197: 7.9781% ( 136) 00:07:25.501 7360.197 - 7410.609: 8.6838% ( 98) 00:07:25.501 7410.609 - 7461.022: 9.3246% ( 89) 00:07:25.501 7461.022 - 7511.434: 9.6054% ( 39) 00:07:25.501 7511.434 - 7561.846: 9.8358% ( 32) 00:07:25.501 7561.846 - 7612.258: 10.0302% ( 27) 00:07:25.501 7612.258 - 7662.671: 10.2535% ( 31) 00:07:25.501 7662.671 - 7713.083: 10.5559% ( 42) 00:07:25.501 7713.083 - 7763.495: 10.8871% ( 46) 00:07:25.501 7763.495 - 7813.908: 11.3767% ( 68) 00:07:25.501 7813.908 - 7864.320: 12.0176% ( 89) 00:07:25.501 7864.320 - 7914.732: 12.7664% ( 104) 00:07:25.501 7914.732 - 7965.145: 13.7529% ( 137) 00:07:25.501 7965.145 - 8015.557: 14.8834% ( 157) 00:07:25.501 8015.557 - 8065.969: 16.2010% ( 183) 00:07:25.501 8065.969 - 8116.382: 17.7635% ( 217) 00:07:25.501 8116.382 - 8166.794: 19.2972% ( 213) 00:07:25.501 8166.794 - 8217.206: 20.6869% ( 193) 00:07:25.501 8217.206 - 8267.618: 22.3430% ( 230) 00:07:25.501 8267.618 - 8318.031: 24.2872% ( 270) 00:07:25.501 8318.031 - 8368.443: 25.7921% ( 209) 00:07:25.501 8368.443 - 8418.855: 27.7218% ( 268) 00:07:25.501 8418.855 - 8469.268: 29.8675% ( 298) 00:07:25.501 8469.268 - 8519.680: 32.1141% ( 312) 00:07:25.501 8519.680 - 8570.092: 34.3534% ( 311) 00:07:25.501 8570.092 - 8620.505: 37.2120% ( 397) 00:07:25.501 8620.505 - 8670.917: 40.1354% ( 406) 00:07:25.501 8670.917 - 8721.329: 42.7491% ( 363) 00:07:25.501 8721.329 - 8771.742: 45.7877% ( 422) 00:07:25.501 8771.742 - 8822.154: 48.8335% ( 423) 00:07:25.501 8822.154 - 8872.566: 51.9225% ( 429) 00:07:25.501 8872.566 - 8922.978: 55.4003% ( 483) 00:07:25.501 8922.978 - 8973.391: 58.9502% ( 493) 00:07:25.501 8973.391 - 9023.803: 62.3920% ( 478) 00:07:25.501 9023.803 - 9074.215: 65.3154% ( 406) 00:07:25.501 9074.215 - 9124.628: 67.6267% ( 321) 00:07:25.501 9124.628 - 9175.040: 69.5781% ( 271) 00:07:25.501 9175.040 - 9225.452: 71.3998% ( 253) 00:07:25.501 9225.452 - 9275.865: 73.0703% ( 232) 00:07:25.501 9275.865 - 9326.277: 74.2224% ( 160) 00:07:25.501 9326.277 - 9376.689: 75.1368% ( 127) 00:07:25.501 9376.689 - 9427.102: 75.9145% ( 108) 00:07:25.501 9427.102 - 9477.514: 76.6921% ( 108) 00:07:25.501 9477.514 - 9527.926: 77.3978% ( 98) 00:07:25.501 9527.926 - 9578.338: 78.0746% ( 94) 00:07:25.501 9578.338 - 9628.751: 78.6794% ( 84) 00:07:25.501 9628.751 - 9679.163: 79.2339% ( 77) 00:07:25.501 9679.163 - 9729.575: 79.8315% ( 83) 00:07:25.501 9729.575 - 9779.988: 80.4363% ( 84) 00:07:25.501 9779.988 - 9830.400: 80.9260% ( 68) 00:07:25.501 9830.400 - 9880.812: 81.3940% ( 65) 00:07:25.501 9880.812 - 9931.225: 81.7900% ( 55) 00:07:25.501 9931.225 - 9981.637: 82.4237% ( 88) 00:07:25.501 9981.637 - 10032.049: 83.1653% ( 103) 00:07:25.501 10032.049 - 10082.462: 83.7774% ( 85) 00:07:25.501 10082.462 - 10132.874: 84.4110% ( 88) 00:07:25.501 10132.874 - 10183.286: 85.4479% ( 144) 00:07:25.501 10183.286 - 10233.698: 86.0167% ( 79) 00:07:25.501 10233.698 - 10284.111: 86.5999% ( 81) 00:07:25.501 10284.111 - 10334.523: 87.2048% ( 84) 00:07:25.501 10334.523 - 10384.935: 87.7520% ( 76) 00:07:25.501 10384.935 - 10435.348: 88.1552% ( 56) 00:07:25.501 10435.348 - 10485.760: 88.5657% ( 57) 00:07:25.501 10485.760 - 10536.172: 88.9905% ( 59) 00:07:25.501 10536.172 - 10586.585: 89.2857% ( 41) 00:07:25.501 10586.585 - 10636.997: 89.5449% ( 36) 00:07:25.501 10636.997 - 10687.409: 89.8546% ( 43) 00:07:25.501 10687.409 - 10737.822: 90.2074% ( 49) 00:07:25.501 10737.822 - 10788.234: 90.3874% ( 25) 00:07:25.501 10788.234 - 10838.646: 
90.6034% ( 30) 00:07:25.501 10838.646 - 10889.058: 90.8266% ( 31) 00:07:25.501 10889.058 - 10939.471: 91.1146% ( 40) 00:07:25.501 10939.471 - 10989.883: 91.4243% ( 43) 00:07:25.501 10989.883 - 11040.295: 91.8419% ( 58) 00:07:25.501 11040.295 - 11090.708: 92.0435% ( 28) 00:07:25.501 11090.708 - 11141.120: 92.2595% ( 30) 00:07:25.501 11141.120 - 11191.532: 92.4899% ( 32) 00:07:25.501 11191.532 - 11241.945: 92.6915% ( 28) 00:07:25.501 11241.945 - 11292.357: 92.8859% ( 27) 00:07:25.501 11292.357 - 11342.769: 93.0300% ( 20) 00:07:25.501 11342.769 - 11393.182: 93.2316% ( 28) 00:07:25.501 11393.182 - 11443.594: 93.4260% ( 27) 00:07:25.501 11443.594 - 11494.006: 93.6276% ( 28) 00:07:25.501 11494.006 - 11544.418: 93.9804% ( 49) 00:07:25.501 11544.418 - 11594.831: 94.2972% ( 44) 00:07:25.501 11594.831 - 11645.243: 94.6429% ( 48) 00:07:25.501 11645.243 - 11695.655: 94.9237% ( 39) 00:07:25.501 11695.655 - 11746.068: 95.1685% ( 34) 00:07:25.501 11746.068 - 11796.480: 95.2981% ( 18) 00:07:25.501 11796.480 - 11846.892: 95.3845% ( 12) 00:07:25.501 11846.892 - 11897.305: 95.4637% ( 11) 00:07:25.501 11897.305 - 11947.717: 95.5213% ( 8) 00:07:25.501 11947.717 - 11998.129: 95.5933% ( 10) 00:07:25.501 11998.129 - 12048.542: 95.6725% ( 11) 00:07:25.501 12048.542 - 12098.954: 95.7661% ( 13) 00:07:25.501 12098.954 - 12149.366: 95.9101% ( 20) 00:07:25.501 12149.366 - 12199.778: 96.0901% ( 25) 00:07:25.501 12199.778 - 12250.191: 96.1910% ( 14) 00:07:25.501 12250.191 - 12300.603: 96.2630% ( 10) 00:07:25.501 12300.603 - 12351.015: 96.3206% ( 8) 00:07:25.501 12351.015 - 12401.428: 96.3782% ( 8) 00:07:25.501 12401.428 - 12451.840: 96.4430% ( 9) 00:07:25.501 12451.840 - 12502.252: 96.5150% ( 10) 00:07:25.501 12502.252 - 12552.665: 96.5582% ( 6) 00:07:25.501 12552.665 - 12603.077: 96.5942% ( 5) 00:07:25.501 12603.077 - 12653.489: 96.6230% ( 4) 00:07:25.501 12653.489 - 12703.902: 96.6518% ( 4) 00:07:25.501 12703.902 - 12754.314: 96.6806% ( 4) 00:07:25.501 12754.314 - 12804.726: 96.7166% ( 5) 00:07:25.501 12804.726 - 12855.138: 96.7310% ( 2) 00:07:25.501 12855.138 - 12905.551: 96.7526% ( 3) 00:07:25.501 12905.551 - 13006.375: 96.8390% ( 12) 00:07:25.501 13006.375 - 13107.200: 96.8750% ( 5) 00:07:25.501 13107.200 - 13208.025: 96.9254% ( 7) 00:07:25.501 13208.025 - 13308.849: 96.9974% ( 10) 00:07:25.501 13308.849 - 13409.674: 97.0766% ( 11) 00:07:25.501 13409.674 - 13510.498: 97.1630% ( 12) 00:07:25.501 13510.498 - 13611.323: 97.2422% ( 11) 00:07:25.501 13611.323 - 13712.148: 97.3286% ( 12) 00:07:25.501 13712.148 - 13812.972: 97.4582% ( 18) 00:07:25.501 13812.972 - 13913.797: 97.5518% ( 13) 00:07:25.501 13913.797 - 14014.622: 97.6671% ( 16) 00:07:25.501 14014.622 - 14115.446: 97.7535% ( 12) 00:07:25.501 14115.446 - 14216.271: 97.8687% ( 16) 00:07:25.501 14216.271 - 14317.095: 97.9551% ( 12) 00:07:25.501 14317.095 - 14417.920: 98.0775% ( 17) 00:07:25.501 14417.920 - 14518.745: 98.2647% ( 26) 00:07:25.501 14518.745 - 14619.569: 98.4591% ( 27) 00:07:25.501 14619.569 - 14720.394: 98.6535% ( 27) 00:07:25.501 14720.394 - 14821.218: 98.7471% ( 13) 00:07:25.501 14821.218 - 14922.043: 98.8263% ( 11) 00:07:25.501 14922.043 - 15022.868: 98.8911% ( 9) 00:07:25.501 15022.868 - 15123.692: 98.9631% ( 10) 00:07:25.501 15123.692 - 15224.517: 99.0063% ( 6) 00:07:25.501 15224.517 - 15325.342: 99.0279% ( 3) 00:07:25.501 15325.342 - 15426.166: 99.0423% ( 2) 00:07:25.501 15426.166 - 15526.991: 99.0639% ( 3) 00:07:25.501 15526.991 - 15627.815: 99.0783% ( 2) 00:07:25.501 30852.332 - 31053.982: 99.1719% ( 13) 00:07:25.501 31053.982 - 31255.631: 
99.2512% ( 11) 00:07:25.501 31255.631 - 31457.280: 99.2872% ( 5) 00:07:25.501 31457.280 - 31658.929: 99.3304% ( 6) 00:07:25.501 31658.929 - 31860.578: 99.3664% ( 5) 00:07:25.501 31860.578 - 32062.228: 99.4024% ( 5) 00:07:25.501 32062.228 - 32263.877: 99.4528% ( 7) 00:07:25.501 32263.877 - 32465.526: 99.5032% ( 7) 00:07:25.501 32465.526 - 32667.175: 99.5392% ( 5) 00:07:25.501 35691.914 - 35893.563: 99.5968% ( 8) 00:07:25.502 35893.563 - 36095.212: 99.6184% ( 3) 00:07:25.502 36498.511 - 36700.160: 99.6616% ( 6) 00:07:25.502 36700.160 - 36901.809: 99.6760% ( 2) 00:07:25.502 36901.809 - 37103.458: 99.7048% ( 4) 00:07:25.502 37103.458 - 37305.108: 99.7408% ( 5) 00:07:25.502 37305.108 - 37506.757: 99.7912% ( 7) 00:07:25.502 37506.757 - 37708.406: 99.8416% ( 7) 00:07:25.502 37708.406 - 37910.055: 99.8992% ( 8) 00:07:25.502 37910.055 - 38111.705: 99.9496% ( 7) 00:07:25.502 38111.705 - 38313.354: 99.9928% ( 6) 00:07:25.502 38313.354 - 38515.003: 100.0000% ( 1) 00:07:25.502 00:07:25.502 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:25.502 ============================================================================== 00:07:25.502 Range in us Cumulative IO count 00:07:25.502 6150.302 - 6175.508: 0.0072% ( 1) 00:07:25.502 6200.714 - 6225.920: 0.0144% ( 1) 00:07:25.502 6225.920 - 6251.126: 0.0216% ( 1) 00:07:25.502 6251.126 - 6276.332: 0.0288% ( 1) 00:07:25.502 6276.332 - 6301.538: 0.0360% ( 1) 00:07:25.502 6301.538 - 6326.745: 0.0432% ( 1) 00:07:25.502 6326.745 - 6351.951: 0.0648% ( 3) 00:07:25.502 6351.951 - 6377.157: 0.1224% ( 8) 00:07:25.502 6377.157 - 6402.363: 0.1584% ( 5) 00:07:25.502 6402.363 - 6427.569: 0.2016% ( 6) 00:07:25.502 6427.569 - 6452.775: 0.2592% ( 8) 00:07:25.502 6452.775 - 6503.188: 0.3672% ( 15) 00:07:25.502 6503.188 - 6553.600: 0.5976% ( 32) 00:07:25.502 6553.600 - 6604.012: 0.7704% ( 24) 00:07:25.502 6604.012 - 6654.425: 0.9289% ( 22) 00:07:25.502 6654.425 - 6704.837: 1.2025% ( 38) 00:07:25.502 6704.837 - 6755.249: 1.5337% ( 46) 00:07:25.502 6755.249 - 6805.662: 1.9009% ( 51) 00:07:25.502 6805.662 - 6856.074: 2.3113% ( 57) 00:07:25.502 6856.074 - 6906.486: 3.0746% ( 106) 00:07:25.502 6906.486 - 6956.898: 3.6578% ( 81) 00:07:25.502 6956.898 - 7007.311: 4.1043% ( 62) 00:07:25.502 7007.311 - 7057.723: 4.5723% ( 65) 00:07:25.502 7057.723 - 7108.135: 4.9755% ( 56) 00:07:25.502 7108.135 - 7158.548: 5.4075% ( 60) 00:07:25.502 7158.548 - 7208.960: 5.7820% ( 52) 00:07:25.502 7208.960 - 7259.372: 6.3076% ( 73) 00:07:25.502 7259.372 - 7309.785: 6.9484% ( 89) 00:07:25.502 7309.785 - 7360.197: 7.5029% ( 77) 00:07:25.502 7360.197 - 7410.609: 8.0357% ( 74) 00:07:25.502 7410.609 - 7461.022: 8.4245% ( 54) 00:07:25.502 7461.022 - 7511.434: 8.8566% ( 60) 00:07:25.502 7511.434 - 7561.846: 9.3318% ( 66) 00:07:25.502 7561.846 - 7612.258: 9.7422% ( 57) 00:07:25.502 7612.258 - 7662.671: 10.2823% ( 75) 00:07:25.502 7662.671 - 7713.083: 10.9015% ( 86) 00:07:25.502 7713.083 - 7763.495: 11.5711% ( 93) 00:07:25.502 7763.495 - 7813.908: 12.3848% ( 113) 00:07:25.502 7813.908 - 7864.320: 13.4145% ( 143) 00:07:25.502 7864.320 - 7914.732: 14.3577% ( 131) 00:07:25.502 7914.732 - 7965.145: 15.3082% ( 132) 00:07:25.502 7965.145 - 8015.557: 16.4819% ( 163) 00:07:25.502 8015.557 - 8065.969: 17.5763% ( 152) 00:07:25.502 8065.969 - 8116.382: 18.8580% ( 178) 00:07:25.502 8116.382 - 8166.794: 20.8525% ( 277) 00:07:25.502 8166.794 - 8217.206: 22.4510% ( 222) 00:07:25.502 8217.206 - 8267.618: 24.3448% ( 263) 00:07:25.502 8267.618 - 8318.031: 26.6417% ( 319) 00:07:25.502 8318.031 - 8368.443: 28.7946% ( 
299) 00:07:25.502 8368.443 - 8418.855: 31.1636% ( 329) 00:07:25.502 8418.855 - 8469.268: 33.3237% ( 300) 00:07:25.502 8469.268 - 8519.680: 35.3471% ( 281) 00:07:25.502 8519.680 - 8570.092: 37.4496% ( 292) 00:07:25.502 8570.092 - 8620.505: 39.9986% ( 354) 00:07:25.502 8620.505 - 8670.917: 42.8283% ( 393) 00:07:25.502 8670.917 - 8721.329: 45.6005% ( 385) 00:07:25.502 8721.329 - 8771.742: 48.4951% ( 402) 00:07:25.502 8771.742 - 8822.154: 50.8353% ( 325) 00:07:25.502 8822.154 - 8872.566: 52.9738% ( 297) 00:07:25.502 8872.566 - 8922.978: 55.2203% ( 312) 00:07:25.502 8922.978 - 8973.391: 57.4021% ( 303) 00:07:25.502 8973.391 - 9023.803: 59.5766% ( 302) 00:07:25.502 9023.803 - 9074.215: 61.9600% ( 331) 00:07:25.502 9074.215 - 9124.628: 64.1345% ( 302) 00:07:25.502 9124.628 - 9175.040: 66.2010% ( 287) 00:07:25.502 9175.040 - 9225.452: 67.8787% ( 233) 00:07:25.502 9225.452 - 9275.865: 69.5349% ( 230) 00:07:25.502 9275.865 - 9326.277: 71.0974% ( 217) 00:07:25.502 9326.277 - 9376.689: 72.4942% ( 194) 00:07:25.502 9376.689 - 9427.102: 73.8695% ( 191) 00:07:25.502 9427.102 - 9477.514: 75.3024% ( 199) 00:07:25.502 9477.514 - 9527.926: 76.5481% ( 173) 00:07:25.502 9527.926 - 9578.338: 77.5130% ( 134) 00:07:25.502 9578.338 - 9628.751: 78.4634% ( 132) 00:07:25.502 9628.751 - 9679.163: 79.3059% ( 117) 00:07:25.502 9679.163 - 9729.575: 80.1771% ( 121) 00:07:25.502 9729.575 - 9779.988: 80.8252% ( 90) 00:07:25.502 9779.988 - 9830.400: 81.4228% ( 83) 00:07:25.502 9830.400 - 9880.812: 82.0349% ( 85) 00:07:25.502 9880.812 - 9931.225: 82.6685% ( 88) 00:07:25.502 9931.225 - 9981.637: 83.3525% ( 95) 00:07:25.502 9981.637 - 10032.049: 83.9286% ( 80) 00:07:25.502 10032.049 - 10082.462: 84.5262% ( 83) 00:07:25.502 10082.462 - 10132.874: 85.0662% ( 75) 00:07:25.502 10132.874 - 10183.286: 85.7143% ( 90) 00:07:25.502 10183.286 - 10233.698: 86.4127% ( 97) 00:07:25.502 10233.698 - 10284.111: 86.9744% ( 78) 00:07:25.502 10284.111 - 10334.523: 87.5504% ( 80) 00:07:25.502 10334.523 - 10384.935: 87.9176% ( 51) 00:07:25.502 10384.935 - 10435.348: 88.1912% ( 38) 00:07:25.502 10435.348 - 10485.760: 88.5297% ( 47) 00:07:25.502 10485.760 - 10536.172: 88.8177% ( 40) 00:07:25.502 10536.172 - 10586.585: 89.1561% ( 47) 00:07:25.502 10586.585 - 10636.997: 89.5017% ( 48) 00:07:25.502 10636.997 - 10687.409: 89.8618% ( 50) 00:07:25.502 10687.409 - 10737.822: 90.1642% ( 42) 00:07:25.502 10737.822 - 10788.234: 90.3946% ( 32) 00:07:25.502 10788.234 - 10838.646: 90.7906% ( 55) 00:07:25.502 10838.646 - 10889.058: 90.9562% ( 23) 00:07:25.502 10889.058 - 10939.471: 91.1578% ( 28) 00:07:25.502 10939.471 - 10989.883: 91.2946% ( 19) 00:07:25.502 10989.883 - 11040.295: 91.4675% ( 24) 00:07:25.502 11040.295 - 11090.708: 91.6835% ( 30) 00:07:25.502 11090.708 - 11141.120: 91.8851% ( 28) 00:07:25.502 11141.120 - 11191.532: 92.0003% ( 16) 00:07:25.502 11191.532 - 11241.945: 92.1875% ( 26) 00:07:25.502 11241.945 - 11292.357: 92.3531% ( 23) 00:07:25.502 11292.357 - 11342.769: 92.6123% ( 36) 00:07:25.502 11342.769 - 11393.182: 92.8427% ( 32) 00:07:25.502 11393.182 - 11443.594: 93.0156% ( 24) 00:07:25.502 11443.594 - 11494.006: 93.2028% ( 26) 00:07:25.502 11494.006 - 11544.418: 93.3108% ( 15) 00:07:25.502 11544.418 - 11594.831: 93.4476% ( 19) 00:07:25.502 11594.831 - 11645.243: 93.6852% ( 33) 00:07:25.502 11645.243 - 11695.655: 93.8004% ( 16) 00:07:25.502 11695.655 - 11746.068: 93.8796% ( 11) 00:07:25.502 11746.068 - 11796.480: 94.0308% ( 21) 00:07:25.502 11796.480 - 11846.892: 94.1676% ( 19) 00:07:25.502 11846.892 - 11897.305: 94.3332% ( 23) 00:07:25.502 
11897.305 - 11947.717: 94.5637% ( 32) 00:07:25.502 11947.717 - 11998.129: 94.8517% ( 40) 00:07:25.502 11998.129 - 12048.542: 95.0245% ( 24) 00:07:25.502 12048.542 - 12098.954: 95.1613% ( 19) 00:07:25.502 12098.954 - 12149.366: 95.3701% ( 29) 00:07:25.502 12149.366 - 12199.778: 95.5645% ( 27) 00:07:25.502 12199.778 - 12250.191: 95.7229% ( 22) 00:07:25.502 12250.191 - 12300.603: 95.7877% ( 9) 00:07:25.502 12300.603 - 12351.015: 95.8813% ( 13) 00:07:25.502 12351.015 - 12401.428: 96.0325% ( 21) 00:07:25.502 12401.428 - 12451.840: 96.0901% ( 8) 00:07:25.502 12451.840 - 12502.252: 96.1622% ( 10) 00:07:25.502 12502.252 - 12552.665: 96.2558% ( 13) 00:07:25.502 12552.665 - 12603.077: 96.3062% ( 7) 00:07:25.502 12603.077 - 12653.489: 96.3854% ( 11) 00:07:25.502 12653.489 - 12703.902: 96.4718% ( 12) 00:07:25.502 12703.902 - 12754.314: 96.5438% ( 10) 00:07:25.502 12754.314 - 12804.726: 96.6374% ( 13) 00:07:25.502 12804.726 - 12855.138: 96.7454% ( 15) 00:07:25.502 12855.138 - 12905.551: 96.8246% ( 11) 00:07:25.502 12905.551 - 13006.375: 96.9326% ( 15) 00:07:25.502 13006.375 - 13107.200: 96.9974% ( 9) 00:07:25.502 13107.200 - 13208.025: 97.0694% ( 10) 00:07:25.502 13208.025 - 13308.849: 97.1198% ( 7) 00:07:25.502 13308.849 - 13409.674: 97.1630% ( 6) 00:07:25.502 13409.674 - 13510.498: 97.2350% ( 10) 00:07:25.502 13510.498 - 13611.323: 97.3790% ( 20) 00:07:25.502 13611.323 - 13712.148: 97.4798% ( 14) 00:07:25.502 13712.148 - 13812.972: 97.5446% ( 9) 00:07:25.502 13812.972 - 13913.797: 97.6094% ( 9) 00:07:25.502 13913.797 - 14014.622: 97.6310% ( 3) 00:07:25.502 14014.622 - 14115.446: 97.7031% ( 10) 00:07:25.502 14115.446 - 14216.271: 97.8399% ( 19) 00:07:25.502 14216.271 - 14317.095: 97.9983% ( 22) 00:07:25.502 14317.095 - 14417.920: 98.1279% ( 18) 00:07:25.502 14417.920 - 14518.745: 98.2431% ( 16) 00:07:25.502 14518.745 - 14619.569: 98.3223% ( 11) 00:07:25.502 14619.569 - 14720.394: 98.4087% ( 12) 00:07:25.502 14720.394 - 14821.218: 98.4951% ( 12) 00:07:25.502 14821.218 - 14922.043: 98.6247% ( 18) 00:07:25.502 14922.043 - 15022.868: 98.7111% ( 12) 00:07:25.502 15022.868 - 15123.692: 98.7903% ( 11) 00:07:25.502 15123.692 - 15224.517: 98.8551% ( 9) 00:07:25.502 15224.517 - 15325.342: 98.9631% ( 15) 00:07:25.502 15325.342 - 15426.166: 99.0135% ( 7) 00:07:25.502 15426.166 - 15526.991: 99.0495% ( 5) 00:07:25.502 15526.991 - 15627.815: 99.0711% ( 3) 00:07:25.502 15627.815 - 15728.640: 99.0783% ( 1) 00:07:25.502 28230.892 - 28432.542: 99.0855% ( 1) 00:07:25.502 28432.542 - 28634.191: 99.1431% ( 8) 00:07:25.502 28634.191 - 28835.840: 99.1791% ( 5) 00:07:25.502 28835.840 - 29037.489: 99.2296% ( 7) 00:07:25.502 29037.489 - 29239.138: 99.2800% ( 7) 00:07:25.502 29239.138 - 29440.788: 99.3304% ( 7) 00:07:25.502 29440.788 - 29642.437: 99.3736% ( 6) 00:07:25.502 29642.437 - 29844.086: 99.4240% ( 7) 00:07:25.502 29844.086 - 30045.735: 99.4744% ( 7) 00:07:25.503 30045.735 - 30247.385: 99.5176% ( 6) 00:07:25.503 30247.385 - 30449.034: 99.5392% ( 3) 00:07:25.503 34885.317 - 35086.966: 99.5536% ( 2) 00:07:25.503 35086.966 - 35288.615: 99.5968% ( 6) 00:07:25.503 35288.615 - 35490.265: 99.6400% ( 6) 00:07:25.503 35490.265 - 35691.914: 99.6760% ( 5) 00:07:25.503 35691.914 - 35893.563: 99.7336% ( 8) 00:07:25.503 35893.563 - 36095.212: 99.7840% ( 7) 00:07:25.503 36095.212 - 36296.862: 99.8272% ( 6) 00:07:25.503 36296.862 - 36498.511: 99.8776% ( 7) 00:07:25.503 36498.511 - 36700.160: 99.9280% ( 7) 00:07:25.503 36700.160 - 36901.809: 99.9712% ( 6) 00:07:25.503 36901.809 - 37103.458: 100.0000% ( 4) 00:07:25.503 00:07:25.503 
Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:25.503 ============================================================================== 00:07:25.503 Range in us Cumulative IO count 00:07:25.503 6377.157 - 6402.363: 0.0144% ( 2) 00:07:25.503 6402.363 - 6427.569: 0.0216% ( 1) 00:07:25.503 6427.569 - 6452.775: 0.0360% ( 2) 00:07:25.503 6452.775 - 6503.188: 0.1440% ( 15) 00:07:25.503 6503.188 - 6553.600: 0.4896% ( 48) 00:07:25.503 6553.600 - 6604.012: 0.6048% ( 16) 00:07:25.503 6604.012 - 6654.425: 0.7632% ( 22) 00:07:25.503 6654.425 - 6704.837: 1.1593% ( 55) 00:07:25.503 6704.837 - 6755.249: 1.2889% ( 18) 00:07:25.503 6755.249 - 6805.662: 1.3681% ( 11) 00:07:25.503 6805.662 - 6856.074: 1.5337% ( 23) 00:07:25.503 6856.074 - 6906.486: 2.2825% ( 104) 00:07:25.503 6906.486 - 6956.898: 2.6498% ( 51) 00:07:25.503 6956.898 - 7007.311: 3.0674% ( 58) 00:07:25.503 7007.311 - 7057.723: 3.8594% ( 110) 00:07:25.503 7057.723 - 7108.135: 4.6875% ( 115) 00:07:25.503 7108.135 - 7158.548: 5.3211% ( 88) 00:07:25.503 7158.548 - 7208.960: 5.6812% ( 50) 00:07:25.503 7208.960 - 7259.372: 6.1204% ( 61) 00:07:25.503 7259.372 - 7309.785: 6.5020% ( 53) 00:07:25.503 7309.785 - 7360.197: 6.9196% ( 58) 00:07:25.503 7360.197 - 7410.609: 7.3517% ( 60) 00:07:25.503 7410.609 - 7461.022: 7.7261% ( 52) 00:07:25.503 7461.022 - 7511.434: 8.1221% ( 55) 00:07:25.503 7511.434 - 7561.846: 8.8062% ( 95) 00:07:25.503 7561.846 - 7612.258: 9.5262% ( 100) 00:07:25.503 7612.258 - 7662.671: 9.9870% ( 64) 00:07:25.503 7662.671 - 7713.083: 10.4119% ( 59) 00:07:25.503 7713.083 - 7763.495: 11.0527% ( 89) 00:07:25.503 7763.495 - 7813.908: 11.5711% ( 72) 00:07:25.503 7813.908 - 7864.320: 12.1976% ( 87) 00:07:25.503 7864.320 - 7914.732: 12.9680% ( 107) 00:07:25.503 7914.732 - 7965.145: 13.9689% ( 139) 00:07:25.503 7965.145 - 8015.557: 15.1858% ( 169) 00:07:25.503 8015.557 - 8065.969: 16.2730% ( 151) 00:07:25.503 8065.969 - 8116.382: 17.5043% ( 171) 00:07:25.503 8116.382 - 8166.794: 19.0164% ( 210) 00:07:25.503 8166.794 - 8217.206: 20.6581% ( 228) 00:07:25.503 8217.206 - 8267.618: 22.5158% ( 258) 00:07:25.503 8267.618 - 8318.031: 24.3592% ( 256) 00:07:25.503 8318.031 - 8368.443: 26.2817% ( 267) 00:07:25.503 8368.443 - 8418.855: 28.6146% ( 324) 00:07:25.503 8418.855 - 8469.268: 30.6668% ( 285) 00:07:25.503 8469.268 - 8519.680: 32.8989% ( 310) 00:07:25.503 8519.680 - 8570.092: 35.2679% ( 329) 00:07:25.503 8570.092 - 8620.505: 38.1264% ( 397) 00:07:25.503 8620.505 - 8670.917: 41.2082% ( 428) 00:07:25.503 8670.917 - 8721.329: 44.3188% ( 432) 00:07:25.503 8721.329 - 8771.742: 47.1774% ( 397) 00:07:25.503 8771.742 - 8822.154: 50.4464% ( 454) 00:07:25.503 8822.154 - 8872.566: 53.5858% ( 436) 00:07:25.503 8872.566 - 8922.978: 56.2788% ( 374) 00:07:25.503 8922.978 - 8973.391: 58.7486% ( 343) 00:07:25.503 8973.391 - 9023.803: 61.2183% ( 343) 00:07:25.503 9023.803 - 9074.215: 63.7025% ( 345) 00:07:25.503 9074.215 - 9124.628: 66.2586% ( 355) 00:07:25.503 9124.628 - 9175.040: 68.5268% ( 315) 00:07:25.503 9175.040 - 9225.452: 70.2189% ( 235) 00:07:25.503 9225.452 - 9275.865: 71.9686% ( 243) 00:07:25.503 9275.865 - 9326.277: 73.4231% ( 202) 00:07:25.503 9326.277 - 9376.689: 74.8848% ( 203) 00:07:25.503 9376.689 - 9427.102: 76.2241% ( 186) 00:07:25.503 9427.102 - 9477.514: 77.2249% ( 139) 00:07:25.503 9477.514 - 9527.926: 78.1826% ( 133) 00:07:25.503 9527.926 - 9578.338: 79.0539% ( 121) 00:07:25.503 9578.338 - 9628.751: 79.8099% ( 105) 00:07:25.503 9628.751 - 9679.163: 80.4291% ( 86) 00:07:25.503 9679.163 - 9729.575: 81.0772% ( 90) 00:07:25.503 
9729.575 - 9779.988: 81.6892% ( 85) 00:07:25.503 9779.988 - 9830.400: 82.3157% ( 87) 00:07:25.503 9830.400 - 9880.812: 82.9061% ( 82) 00:07:25.503 9880.812 - 9931.225: 83.3669% ( 64) 00:07:25.503 9931.225 - 9981.637: 83.8134% ( 62) 00:07:25.503 9981.637 - 10032.049: 84.2814% ( 65) 00:07:25.503 10032.049 - 10082.462: 84.7566% ( 66) 00:07:25.503 10082.462 - 10132.874: 85.2535% ( 69) 00:07:25.503 10132.874 - 10183.286: 85.7647% ( 71) 00:07:25.503 10183.286 - 10233.698: 86.2687% ( 70) 00:07:25.503 10233.698 - 10284.111: 86.7224% ( 63) 00:07:25.503 10284.111 - 10334.523: 87.2984% ( 80) 00:07:25.503 10334.523 - 10384.935: 88.0184% ( 100) 00:07:25.503 10384.935 - 10435.348: 88.4721% ( 63) 00:07:25.503 10435.348 - 10485.760: 89.0049% ( 74) 00:07:25.503 10485.760 - 10536.172: 89.4441% ( 61) 00:07:25.503 10536.172 - 10586.585: 89.8762% ( 60) 00:07:25.503 10586.585 - 10636.997: 90.2506% ( 52) 00:07:25.503 10636.997 - 10687.409: 90.5098% ( 36) 00:07:25.503 10687.409 - 10737.822: 90.7330% ( 31) 00:07:25.503 10737.822 - 10788.234: 91.0066% ( 38) 00:07:25.503 10788.234 - 10838.646: 91.3522% ( 48) 00:07:25.503 10838.646 - 10889.058: 91.5467% ( 27) 00:07:25.503 10889.058 - 10939.471: 91.6907% ( 20) 00:07:25.503 10939.471 - 10989.883: 91.8707% ( 25) 00:07:25.503 10989.883 - 11040.295: 92.0723% ( 28) 00:07:25.503 11040.295 - 11090.708: 92.2811% ( 29) 00:07:25.503 11090.708 - 11141.120: 92.4611% ( 25) 00:07:25.503 11141.120 - 11191.532: 92.6267% ( 23) 00:07:25.503 11191.532 - 11241.945: 92.7923% ( 23) 00:07:25.503 11241.945 - 11292.357: 92.9363% ( 20) 00:07:25.503 11292.357 - 11342.769: 93.1092% ( 24) 00:07:25.503 11342.769 - 11393.182: 93.2460% ( 19) 00:07:25.503 11393.182 - 11443.594: 93.3828% ( 19) 00:07:25.503 11443.594 - 11494.006: 93.5268% ( 20) 00:07:25.503 11494.006 - 11544.418: 93.6996% ( 24) 00:07:25.503 11544.418 - 11594.831: 93.8076% ( 15) 00:07:25.503 11594.831 - 11645.243: 93.9228% ( 16) 00:07:25.503 11645.243 - 11695.655: 94.0380% ( 16) 00:07:25.503 11695.655 - 11746.068: 94.1748% ( 19) 00:07:25.503 11746.068 - 11796.480: 94.2684% ( 13) 00:07:25.503 11796.480 - 11846.892: 94.3548% ( 12) 00:07:25.503 11846.892 - 11897.305: 94.4268% ( 10) 00:07:25.503 11897.305 - 11947.717: 94.4916% ( 9) 00:07:25.503 11947.717 - 11998.129: 94.5493% ( 8) 00:07:25.503 11998.129 - 12048.542: 94.6213% ( 10) 00:07:25.503 12048.542 - 12098.954: 94.7005% ( 11) 00:07:25.503 12098.954 - 12149.366: 94.8085% ( 15) 00:07:25.503 12149.366 - 12199.778: 94.9093% ( 14) 00:07:25.503 12199.778 - 12250.191: 95.0029% ( 13) 00:07:25.503 12250.191 - 12300.603: 95.0749% ( 10) 00:07:25.503 12300.603 - 12351.015: 95.2045% ( 18) 00:07:25.503 12351.015 - 12401.428: 95.3125% ( 15) 00:07:25.503 12401.428 - 12451.840: 95.4133% ( 14) 00:07:25.503 12451.840 - 12502.252: 95.4925% ( 11) 00:07:25.503 12502.252 - 12552.665: 95.5861% ( 13) 00:07:25.503 12552.665 - 12603.077: 95.7085% ( 17) 00:07:25.503 12603.077 - 12653.489: 95.8741% ( 23) 00:07:25.503 12653.489 - 12703.902: 96.0253% ( 21) 00:07:25.503 12703.902 - 12754.314: 96.3710% ( 48) 00:07:25.503 12754.314 - 12804.726: 96.5366% ( 23) 00:07:25.503 12804.726 - 12855.138: 96.7022% ( 23) 00:07:25.503 12855.138 - 12905.551: 96.8174% ( 16) 00:07:25.503 12905.551 - 13006.375: 97.0046% ( 26) 00:07:25.503 13006.375 - 13107.200: 97.2206% ( 30) 00:07:25.503 13107.200 - 13208.025: 97.3430% ( 17) 00:07:25.503 13208.025 - 13308.849: 97.4294% ( 12) 00:07:25.503 13308.849 - 13409.674: 97.5158% ( 12) 00:07:25.503 13409.674 - 13510.498: 97.5446% ( 4) 00:07:25.503 13510.498 - 13611.323: 97.5950% ( 7) 
00:07:25.503 13611.323 - 13712.148: 97.6743% ( 11) 00:07:25.503 13712.148 - 13812.972: 97.7463% ( 10) 00:07:25.503 13812.972 - 13913.797: 97.8183% ( 10) 00:07:25.503 13913.797 - 14014.622: 97.8831% ( 9) 00:07:25.503 14014.622 - 14115.446: 97.9551% ( 10) 00:07:25.503 14115.446 - 14216.271: 98.0271% ( 10) 00:07:25.503 14216.271 - 14317.095: 98.1567% ( 18) 00:07:25.503 14317.095 - 14417.920: 98.3727% ( 30) 00:07:25.503 14417.920 - 14518.745: 98.4591% ( 12) 00:07:25.503 14518.745 - 14619.569: 98.5239% ( 9) 00:07:25.503 14619.569 - 14720.394: 98.5527% ( 4) 00:07:25.503 14720.394 - 14821.218: 98.5815% ( 4) 00:07:25.503 14821.218 - 14922.043: 98.6463% ( 9) 00:07:25.503 14922.043 - 15022.868: 98.6823% ( 5) 00:07:25.503 15022.868 - 15123.692: 98.7183% ( 5) 00:07:25.503 15123.692 - 15224.517: 98.7471% ( 4) 00:07:25.503 15224.517 - 15325.342: 98.7759% ( 4) 00:07:25.503 15325.342 - 15426.166: 98.8551% ( 11) 00:07:25.503 15426.166 - 15526.991: 98.9487% ( 13) 00:07:25.503 15526.991 - 15627.815: 99.0351% ( 12) 00:07:25.503 15627.815 - 15728.640: 99.0711% ( 5) 00:07:25.503 15728.640 - 15829.465: 99.0783% ( 1) 00:07:25.503 26617.698 - 26819.348: 99.0927% ( 2) 00:07:25.503 26819.348 - 27020.997: 99.1431% ( 7) 00:07:25.503 27020.997 - 27222.646: 99.2007% ( 8) 00:07:25.503 27222.646 - 27424.295: 99.2584% ( 8) 00:07:25.503 27424.295 - 27625.945: 99.3016% ( 6) 00:07:25.503 27625.945 - 27827.594: 99.3520% ( 7) 00:07:25.503 27827.594 - 28029.243: 99.4024% ( 7) 00:07:25.504 28029.243 - 28230.892: 99.4528% ( 7) 00:07:25.504 28230.892 - 28432.542: 99.5104% ( 8) 00:07:25.504 28432.542 - 28634.191: 99.5392% ( 4) 00:07:25.504 32868.825 - 33070.474: 99.5464% ( 1) 00:07:25.504 33070.474 - 33272.123: 99.6040% ( 8) 00:07:25.504 33272.123 - 33473.772: 99.6544% ( 7) 00:07:25.504 33473.772 - 33675.422: 99.7048% ( 7) 00:07:25.504 33675.422 - 33877.071: 99.7552% ( 7) 00:07:25.504 33877.071 - 34078.720: 99.8056% ( 7) 00:07:25.504 34078.720 - 34280.369: 99.8560% ( 7) 00:07:25.504 34280.369 - 34482.018: 99.9064% ( 7) 00:07:25.504 34482.018 - 34683.668: 99.9640% ( 8) 00:07:25.504 34683.668 - 34885.317: 100.0000% ( 5) 00:07:25.504 00:07:25.504 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:25.504 ============================================================================== 00:07:25.504 Range in us Cumulative IO count 00:07:25.504 6251.126 - 6276.332: 0.0072% ( 1) 00:07:25.504 6301.538 - 6326.745: 0.0144% ( 1) 00:07:25.504 6326.745 - 6351.951: 0.0288% ( 2) 00:07:25.504 6351.951 - 6377.157: 0.0432% ( 2) 00:07:25.504 6377.157 - 6402.363: 0.1080% ( 9) 00:07:25.504 6402.363 - 6427.569: 0.1512% ( 6) 00:07:25.504 6427.569 - 6452.775: 0.2232% ( 10) 00:07:25.504 6452.775 - 6503.188: 0.5112% ( 40) 00:07:25.504 6503.188 - 6553.600: 0.7921% ( 39) 00:07:25.504 6553.600 - 6604.012: 0.9361% ( 20) 00:07:25.504 6604.012 - 6654.425: 1.1809% ( 34) 00:07:25.504 6654.425 - 6704.837: 1.3465% ( 23) 00:07:25.504 6704.837 - 6755.249: 1.4689% ( 17) 00:07:25.504 6755.249 - 6805.662: 1.5985% ( 18) 00:07:25.504 6805.662 - 6856.074: 1.8145% ( 30) 00:07:25.504 6856.074 - 6906.486: 2.0017% ( 26) 00:07:25.504 6906.486 - 6956.898: 2.2393% ( 33) 00:07:25.504 6956.898 - 7007.311: 2.5562% ( 44) 00:07:25.504 7007.311 - 7057.723: 3.0746% ( 72) 00:07:25.504 7057.723 - 7108.135: 3.8954% ( 114) 00:07:25.504 7108.135 - 7158.548: 4.5291% ( 88) 00:07:25.504 7158.548 - 7208.960: 5.3139% ( 109) 00:07:25.504 7208.960 - 7259.372: 6.0268% ( 99) 00:07:25.504 7259.372 - 7309.785: 6.8980% ( 121) 00:07:25.504 7309.785 - 7360.197: 7.4741% ( 80) 00:07:25.504 
7360.197 - 7410.609: 8.1725% ( 97) 00:07:25.504 7410.609 - 7461.022: 8.6190% ( 62) 00:07:25.504 7461.022 - 7511.434: 9.0654% ( 62) 00:07:25.504 7511.434 - 7561.846: 9.5190% ( 63) 00:07:25.504 7561.846 - 7612.258: 9.9798% ( 64) 00:07:25.504 7612.258 - 7662.671: 10.3831% ( 56) 00:07:25.504 7662.671 - 7713.083: 10.7143% ( 46) 00:07:25.504 7713.083 - 7763.495: 11.1391% ( 59) 00:07:25.504 7763.495 - 7813.908: 11.7079% ( 79) 00:07:25.504 7813.908 - 7864.320: 12.2840% ( 80) 00:07:25.504 7864.320 - 7914.732: 12.9896% ( 98) 00:07:25.504 7914.732 - 7965.145: 13.6521% ( 92) 00:07:25.504 7965.145 - 8015.557: 14.7465% ( 152) 00:07:25.504 8015.557 - 8065.969: 16.2082% ( 203) 00:07:25.504 8065.969 - 8116.382: 17.5619% ( 188) 00:07:25.504 8116.382 - 8166.794: 18.8292% ( 176) 00:07:25.504 8166.794 - 8217.206: 20.3701% ( 214) 00:07:25.504 8217.206 - 8267.618: 22.3790% ( 279) 00:07:25.504 8267.618 - 8318.031: 24.1143% ( 241) 00:07:25.504 8318.031 - 8368.443: 26.0081% ( 263) 00:07:25.504 8368.443 - 8418.855: 28.0602% ( 285) 00:07:25.504 8418.855 - 8469.268: 30.4435% ( 331) 00:07:25.504 8469.268 - 8519.680: 32.8269% ( 331) 00:07:25.504 8519.680 - 8570.092: 35.3399% ( 349) 00:07:25.504 8570.092 - 8620.505: 38.1480% ( 390) 00:07:25.504 8620.505 - 8670.917: 40.5890% ( 339) 00:07:25.504 8670.917 - 8721.329: 43.5556% ( 412) 00:07:25.504 8721.329 - 8771.742: 46.9254% ( 468) 00:07:25.504 8771.742 - 8822.154: 50.0792% ( 438) 00:07:25.504 8822.154 - 8872.566: 53.1466% ( 426) 00:07:25.504 8872.566 - 8922.978: 56.1276% ( 414) 00:07:25.504 8922.978 - 8973.391: 59.0654% ( 408) 00:07:25.504 8973.391 - 9023.803: 61.9744% ( 404) 00:07:25.504 9023.803 - 9074.215: 64.7681% ( 388) 00:07:25.504 9074.215 - 9124.628: 67.1227% ( 327) 00:07:25.504 9124.628 - 9175.040: 69.1388% ( 280) 00:07:25.504 9175.040 - 9225.452: 70.9461% ( 251) 00:07:25.504 9225.452 - 9275.865: 72.8687% ( 267) 00:07:25.504 9275.865 - 9326.277: 74.2872% ( 197) 00:07:25.504 9326.277 - 9376.689: 75.4968% ( 168) 00:07:25.504 9376.689 - 9427.102: 76.4689% ( 135) 00:07:25.504 9427.102 - 9477.514: 77.2825% ( 113) 00:07:25.504 9477.514 - 9527.926: 78.1322% ( 118) 00:07:25.504 9527.926 - 9578.338: 78.8594% ( 101) 00:07:25.504 9578.338 - 9628.751: 79.6299% ( 107) 00:07:25.504 9628.751 - 9679.163: 80.3427% ( 99) 00:07:25.504 9679.163 - 9729.575: 81.0484% ( 98) 00:07:25.504 9729.575 - 9779.988: 81.8260% ( 108) 00:07:25.504 9779.988 - 9830.400: 82.6109% ( 109) 00:07:25.504 9830.400 - 9880.812: 83.3381% ( 101) 00:07:25.504 9880.812 - 9931.225: 83.8854% ( 76) 00:07:25.504 9931.225 - 9981.637: 84.4542% ( 79) 00:07:25.504 9981.637 - 10032.049: 84.9582% ( 70) 00:07:25.504 10032.049 - 10082.462: 85.4047% ( 62) 00:07:25.504 10082.462 - 10132.874: 85.9159% ( 71) 00:07:25.504 10132.874 - 10183.286: 86.2975% ( 53) 00:07:25.504 10183.286 - 10233.698: 86.7728% ( 66) 00:07:25.504 10233.698 - 10284.111: 87.2768% ( 70) 00:07:25.504 10284.111 - 10334.523: 87.7304% ( 63) 00:07:25.504 10334.523 - 10384.935: 88.1336% ( 56) 00:07:25.504 10384.935 - 10435.348: 88.5225% ( 54) 00:07:25.504 10435.348 - 10485.760: 88.8897% ( 51) 00:07:25.504 10485.760 - 10536.172: 89.5161% ( 87) 00:07:25.504 10536.172 - 10586.585: 89.7897% ( 38) 00:07:25.504 10586.585 - 10636.997: 90.0778% ( 40) 00:07:25.504 10636.997 - 10687.409: 90.3730% ( 41) 00:07:25.504 10687.409 - 10737.822: 90.6970% ( 45) 00:07:25.504 10737.822 - 10788.234: 91.1434% ( 62) 00:07:25.504 10788.234 - 10838.646: 91.4819% ( 47) 00:07:25.504 10838.646 - 10889.058: 91.8563% ( 52) 00:07:25.504 10889.058 - 10939.471: 92.0435% ( 26) 00:07:25.504 
10939.471 - 10989.883: 92.3099% ( 37) 00:07:25.504 10989.883 - 11040.295: 92.4683% ( 22) 00:07:25.504 11040.295 - 11090.708: 92.6051% ( 19) 00:07:25.504 11090.708 - 11141.120: 92.8499% ( 34) 00:07:25.504 11141.120 - 11191.532: 93.0516% ( 28) 00:07:25.504 11191.532 - 11241.945: 93.2676% ( 30) 00:07:25.504 11241.945 - 11292.357: 93.3828% ( 16) 00:07:25.504 11292.357 - 11342.769: 93.4764% ( 13) 00:07:25.504 11342.769 - 11393.182: 93.5916% ( 16) 00:07:25.504 11393.182 - 11443.594: 93.8220% ( 32) 00:07:25.504 11443.594 - 11494.006: 93.9300% ( 15) 00:07:25.504 11494.006 - 11544.418: 93.9948% ( 9) 00:07:25.504 11544.418 - 11594.831: 94.0668% ( 10) 00:07:25.504 11594.831 - 11645.243: 94.1244% ( 8) 00:07:25.504 11645.243 - 11695.655: 94.1604% ( 5) 00:07:25.504 11695.655 - 11746.068: 94.1820% ( 3) 00:07:25.504 11746.068 - 11796.480: 94.2036% ( 3) 00:07:25.504 11796.480 - 11846.892: 94.2324% ( 4) 00:07:25.504 11846.892 - 11897.305: 94.2612% ( 4) 00:07:25.504 11897.305 - 11947.717: 94.2828% ( 3) 00:07:25.504 11947.717 - 11998.129: 94.3836% ( 14) 00:07:25.504 11998.129 - 12048.542: 94.5204% ( 19) 00:07:25.504 12048.542 - 12098.954: 94.6141% ( 13) 00:07:25.504 12098.954 - 12149.366: 94.7293% ( 16) 00:07:25.504 12149.366 - 12199.778: 94.8229% ( 13) 00:07:25.504 12199.778 - 12250.191: 94.9165% ( 13) 00:07:25.504 12250.191 - 12300.603: 95.0317% ( 16) 00:07:25.504 12300.603 - 12351.015: 95.1541% ( 17) 00:07:25.504 12351.015 - 12401.428: 95.3197% ( 23) 00:07:25.504 12401.428 - 12451.840: 95.4349% ( 16) 00:07:25.504 12451.840 - 12502.252: 95.5717% ( 19) 00:07:25.504 12502.252 - 12552.665: 95.7661% ( 27) 00:07:25.504 12552.665 - 12603.077: 96.0253% ( 36) 00:07:25.504 12603.077 - 12653.489: 96.2486% ( 31) 00:07:25.504 12653.489 - 12703.902: 96.4430% ( 27) 00:07:25.504 12703.902 - 12754.314: 96.5942% ( 21) 00:07:25.504 12754.314 - 12804.726: 96.7022% ( 15) 00:07:25.504 12804.726 - 12855.138: 96.7886% ( 12) 00:07:25.504 12855.138 - 12905.551: 96.8750% ( 12) 00:07:25.504 12905.551 - 13006.375: 97.0046% ( 18) 00:07:25.504 13006.375 - 13107.200: 97.0910% ( 12) 00:07:25.504 13107.200 - 13208.025: 97.1414% ( 7) 00:07:25.504 13208.025 - 13308.849: 97.2638% ( 17) 00:07:25.504 13308.849 - 13409.674: 97.3214% ( 8) 00:07:25.504 13409.674 - 13510.498: 97.3862% ( 9) 00:07:25.504 13510.498 - 13611.323: 97.4294% ( 6) 00:07:25.504 13611.323 - 13712.148: 97.4726% ( 6) 00:07:25.504 13712.148 - 13812.972: 97.5878% ( 16) 00:07:25.504 13812.972 - 13913.797: 97.7895% ( 28) 00:07:25.504 13913.797 - 14014.622: 97.8615% ( 10) 00:07:25.504 14014.622 - 14115.446: 97.9911% ( 18) 00:07:25.504 14115.446 - 14216.271: 98.2143% ( 31) 00:07:25.504 14216.271 - 14317.095: 98.3871% ( 24) 00:07:25.504 14317.095 - 14417.920: 98.4663% ( 11) 00:07:25.504 14417.920 - 14518.745: 98.5167% ( 7) 00:07:25.504 14518.745 - 14619.569: 98.5383% ( 3) 00:07:25.504 14619.569 - 14720.394: 98.5599% ( 3) 00:07:25.504 14720.394 - 14821.218: 98.5887% ( 4) 00:07:25.504 14821.218 - 14922.043: 98.6175% ( 4) 00:07:25.504 15224.517 - 15325.342: 98.6247% ( 1) 00:07:25.504 15426.166 - 15526.991: 98.6319% ( 1) 00:07:25.504 15526.991 - 15627.815: 98.6895% ( 8) 00:07:25.504 15627.815 - 15728.640: 98.7903% ( 14) 00:07:25.504 15728.640 - 15829.465: 98.9919% ( 28) 00:07:25.504 15829.465 - 15930.289: 99.0351% ( 6) 00:07:25.504 15930.289 - 16031.114: 99.0783% ( 6) 00:07:25.504 25508.628 - 25609.452: 99.0855% ( 1) 00:07:25.504 25609.452 - 25710.277: 99.1071% ( 3) 00:07:25.504 25710.277 - 25811.102: 99.1359% ( 4) 00:07:25.504 25811.102 - 26012.751: 99.1863% ( 7) 00:07:25.504 26012.751 - 
26214.400: 99.2368% ( 7) 00:07:25.504 26214.400 - 26416.049: 99.2872% ( 7) 00:07:25.504 26416.049 - 26617.698: 99.3448% ( 8) 00:07:25.504 26617.698 - 26819.348: 99.3952% ( 7) 00:07:25.504 26819.348 - 27020.997: 99.4456% ( 7) 00:07:25.504 27020.997 - 27222.646: 99.5032% ( 8) 00:07:25.504 27222.646 - 27424.295: 99.5392% ( 5) 00:07:25.504 32062.228 - 32263.877: 99.5680% ( 4) 00:07:25.504 32263.877 - 32465.526: 99.6256% ( 8) 00:07:25.504 32465.526 - 32667.175: 99.6760% ( 7) 00:07:25.504 32667.175 - 32868.825: 99.7264% ( 7) 00:07:25.505 32868.825 - 33070.474: 99.7768% ( 7) 00:07:25.505 33070.474 - 33272.123: 99.8344% ( 8) 00:07:25.505 33272.123 - 33473.772: 99.8848% ( 7) 00:07:25.505 33473.772 - 33675.422: 99.9424% ( 8) 00:07:25.505 33675.422 - 33877.071: 99.9928% ( 7) 00:07:25.505 33877.071 - 34078.720: 100.0000% ( 1) 00:07:25.505 00:07:25.505 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:25.505 ============================================================================== 00:07:25.505 Range in us Cumulative IO count 00:07:25.505 6024.271 - 6049.477: 0.0072% ( 1) 00:07:25.505 6150.302 - 6175.508: 0.0144% ( 1) 00:07:25.505 6200.714 - 6225.920: 0.0216% ( 1) 00:07:25.505 6225.920 - 6251.126: 0.0288% ( 1) 00:07:25.505 6251.126 - 6276.332: 0.0432% ( 2) 00:07:25.505 6276.332 - 6301.538: 0.0720% ( 4) 00:07:25.505 6301.538 - 6326.745: 0.0936% ( 3) 00:07:25.505 6326.745 - 6351.951: 0.1224% ( 4) 00:07:25.505 6351.951 - 6377.157: 0.1656% ( 6) 00:07:25.505 6377.157 - 6402.363: 0.2232% ( 8) 00:07:25.505 6402.363 - 6427.569: 0.2952% ( 10) 00:07:25.505 6427.569 - 6452.775: 0.4176% ( 17) 00:07:25.505 6452.775 - 6503.188: 0.5544% ( 19) 00:07:25.505 6503.188 - 6553.600: 0.8281% ( 38) 00:07:25.505 6553.600 - 6604.012: 0.9865% ( 22) 00:07:25.505 6604.012 - 6654.425: 1.1449% ( 22) 00:07:25.505 6654.425 - 6704.837: 1.2601% ( 16) 00:07:25.505 6704.837 - 6755.249: 1.3393% ( 11) 00:07:25.505 6755.249 - 6805.662: 1.4113% ( 10) 00:07:25.505 6805.662 - 6856.074: 1.5337% ( 17) 00:07:25.505 6856.074 - 6906.486: 1.6993% ( 23) 00:07:25.505 6906.486 - 6956.898: 1.9729% ( 38) 00:07:25.505 6956.898 - 7007.311: 2.5202% ( 76) 00:07:25.505 7007.311 - 7057.723: 3.3266% ( 112) 00:07:25.505 7057.723 - 7108.135: 4.0683% ( 103) 00:07:25.505 7108.135 - 7158.548: 4.8747% ( 112) 00:07:25.505 7158.548 - 7208.960: 5.8180% ( 131) 00:07:25.505 7208.960 - 7259.372: 6.3796% ( 78) 00:07:25.505 7259.372 - 7309.785: 7.2149% ( 116) 00:07:25.505 7309.785 - 7360.197: 7.7549% ( 75) 00:07:25.505 7360.197 - 7410.609: 8.5469% ( 110) 00:07:25.505 7410.609 - 7461.022: 9.0294% ( 67) 00:07:25.505 7461.022 - 7511.434: 9.2886% ( 36) 00:07:25.505 7511.434 - 7561.846: 9.5766% ( 40) 00:07:25.505 7561.846 - 7612.258: 9.9006% ( 45) 00:07:25.505 7612.258 - 7662.671: 10.2103% ( 43) 00:07:25.505 7662.671 - 7713.083: 10.5703% ( 50) 00:07:25.505 7713.083 - 7763.495: 11.1031% ( 74) 00:07:25.505 7763.495 - 7813.908: 11.5063% ( 56) 00:07:25.505 7813.908 - 7864.320: 12.1472% ( 89) 00:07:25.505 7864.320 - 7914.732: 12.8600% ( 99) 00:07:25.505 7914.732 - 7965.145: 13.8897% ( 143) 00:07:25.505 7965.145 - 8015.557: 15.0922% ( 167) 00:07:25.505 8015.557 - 8065.969: 16.4891% ( 194) 00:07:25.505 8065.969 - 8116.382: 17.7203% ( 171) 00:07:25.505 8116.382 - 8166.794: 19.1964% ( 205) 00:07:25.505 8166.794 - 8217.206: 20.7301% ( 213) 00:07:25.505 8217.206 - 8267.618: 22.5734% ( 256) 00:07:25.505 8267.618 - 8318.031: 24.1287% ( 216) 00:07:25.505 8318.031 - 8368.443: 26.2025% ( 288) 00:07:25.505 8368.443 - 8418.855: 28.1250% ( 267) 00:07:25.505 8418.855 - 8469.268: 
30.4075% ( 317) 00:07:25.505 8469.268 - 8519.680: 32.5965% ( 304) 00:07:25.505 8519.680 - 8570.092: 35.2463% ( 368) 00:07:25.505 8570.092 - 8620.505: 37.9896% ( 381) 00:07:25.505 8620.505 - 8670.917: 40.7978% ( 390) 00:07:25.505 8670.917 - 8721.329: 43.7428% ( 409) 00:07:25.505 8721.329 - 8771.742: 46.5726% ( 393) 00:07:25.505 8771.742 - 8822.154: 49.6040% ( 421) 00:07:25.505 8822.154 - 8872.566: 53.0386% ( 477) 00:07:25.505 8872.566 - 8922.978: 56.1708% ( 435) 00:07:25.505 8922.978 - 8973.391: 59.3030% ( 435) 00:07:25.505 8973.391 - 9023.803: 62.2984% ( 416) 00:07:25.505 9023.803 - 9074.215: 64.9986% ( 375) 00:07:25.505 9074.215 - 9124.628: 67.4611% ( 342) 00:07:25.505 9124.628 - 9175.040: 69.4844% ( 281) 00:07:25.505 9175.040 - 9225.452: 71.0974% ( 224) 00:07:25.505 9225.452 - 9275.865: 72.8615% ( 245) 00:07:25.505 9275.865 - 9326.277: 74.1575% ( 180) 00:07:25.505 9326.277 - 9376.689: 75.1656% ( 140) 00:07:25.505 9376.689 - 9427.102: 76.0945% ( 129) 00:07:25.505 9427.102 - 9477.514: 76.8865% ( 110) 00:07:25.505 9477.514 - 9527.926: 77.4986% ( 85) 00:07:25.505 9527.926 - 9578.338: 77.9810% ( 67) 00:07:25.505 9578.338 - 9628.751: 78.5138% ( 74) 00:07:25.505 9628.751 - 9679.163: 79.3131% ( 111) 00:07:25.505 9679.163 - 9729.575: 80.1339% ( 114) 00:07:25.505 9729.575 - 9779.988: 80.8828% ( 104) 00:07:25.505 9779.988 - 9830.400: 81.5884% ( 98) 00:07:25.505 9830.400 - 9880.812: 82.2437% ( 91) 00:07:25.505 9880.812 - 9931.225: 83.0069% ( 106) 00:07:25.505 9931.225 - 9981.637: 83.7126% ( 98) 00:07:25.505 9981.637 - 10032.049: 84.5118% ( 111) 00:07:25.505 10032.049 - 10082.462: 85.1382% ( 87) 00:07:25.505 10082.462 - 10132.874: 85.7863% ( 90) 00:07:25.505 10132.874 - 10183.286: 86.2903% ( 70) 00:07:25.505 10183.286 - 10233.698: 86.7584% ( 65) 00:07:25.505 10233.698 - 10284.111: 87.4568% ( 97) 00:07:25.505 10284.111 - 10334.523: 88.0688% ( 85) 00:07:25.505 10334.523 - 10384.935: 88.4865% ( 58) 00:07:25.505 10384.935 - 10435.348: 88.8249% ( 47) 00:07:25.505 10435.348 - 10485.760: 89.1273% ( 42) 00:07:25.505 10485.760 - 10536.172: 89.3865% ( 36) 00:07:25.505 10536.172 - 10586.585: 89.6385% ( 35) 00:07:25.505 10586.585 - 10636.997: 89.8618% ( 31) 00:07:25.505 10636.997 - 10687.409: 90.0418% ( 25) 00:07:25.505 10687.409 - 10737.822: 90.2506% ( 29) 00:07:25.505 10737.822 - 10788.234: 90.5026% ( 35) 00:07:25.505 10788.234 - 10838.646: 91.1074% ( 84) 00:07:25.505 10838.646 - 10889.058: 91.4531% ( 48) 00:07:25.505 10889.058 - 10939.471: 91.8131% ( 50) 00:07:25.505 10939.471 - 10989.883: 92.1731% ( 50) 00:07:25.505 10989.883 - 11040.295: 92.4683% ( 41) 00:07:25.505 11040.295 - 11090.708: 92.7635% ( 41) 00:07:25.505 11090.708 - 11141.120: 93.0588% ( 41) 00:07:25.505 11141.120 - 11191.532: 93.2604% ( 28) 00:07:25.505 11191.532 - 11241.945: 93.4908% ( 32) 00:07:25.505 11241.945 - 11292.357: 93.6492% ( 22) 00:07:25.505 11292.357 - 11342.769: 93.7572% ( 15) 00:07:25.505 11342.769 - 11393.182: 93.8508% ( 13) 00:07:25.505 11393.182 - 11443.594: 93.9228% ( 10) 00:07:25.505 11443.594 - 11494.006: 93.9876% ( 9) 00:07:25.505 11494.006 - 11544.418: 94.0740% ( 12) 00:07:25.505 11544.418 - 11594.831: 94.1892% ( 16) 00:07:25.505 11594.831 - 11645.243: 94.2972% ( 15) 00:07:25.505 11645.243 - 11695.655: 94.4052% ( 15) 00:07:25.505 11695.655 - 11746.068: 94.5204% ( 16) 00:07:25.505 11746.068 - 11796.480: 94.7221% ( 28) 00:07:25.505 11796.480 - 11846.892: 94.8157% ( 13) 00:07:25.505 11846.892 - 11897.305: 94.8949% ( 11) 00:07:25.505 11897.305 - 11947.717: 94.9453% ( 7) 00:07:25.505 11947.717 - 11998.129: 95.0029% ( 8) 
00:07:25.505 11998.129 - 12048.542: 95.0389% ( 5) 00:07:25.505 12048.542 - 12098.954: 95.0821% ( 6) 00:07:25.505 12098.954 - 12149.366: 95.1253% ( 6) 00:07:25.505 12149.366 - 12199.778: 95.1901% ( 9) 00:07:25.505 12199.778 - 12250.191: 95.2549% ( 9) 00:07:25.505 12250.191 - 12300.603: 95.3413% ( 12) 00:07:25.505 12300.603 - 12351.015: 95.4133% ( 10) 00:07:25.505 12351.015 - 12401.428: 95.5141% ( 14) 00:07:25.505 12401.428 - 12451.840: 95.5573% ( 6) 00:07:25.505 12451.840 - 12502.252: 95.6149% ( 8) 00:07:25.505 12502.252 - 12552.665: 95.6725% ( 8) 00:07:25.505 12552.665 - 12603.077: 95.7445% ( 10) 00:07:25.505 12603.077 - 12653.489: 95.8453% ( 14) 00:07:25.505 12653.489 - 12703.902: 95.9461% ( 14) 00:07:25.505 12703.902 - 12754.314: 96.0181% ( 10) 00:07:25.505 12754.314 - 12804.726: 96.0757% ( 8) 00:07:25.505 12804.726 - 12855.138: 96.1550% ( 11) 00:07:25.505 12855.138 - 12905.551: 96.2054% ( 7) 00:07:25.505 12905.551 - 13006.375: 96.4142% ( 29) 00:07:25.505 13006.375 - 13107.200: 96.6878% ( 38) 00:07:25.505 13107.200 - 13208.025: 96.9470% ( 36) 00:07:25.505 13208.025 - 13308.849: 97.3646% ( 58) 00:07:25.505 13308.849 - 13409.674: 97.6166% ( 35) 00:07:25.505 13409.674 - 13510.498: 97.7895% ( 24) 00:07:25.505 13510.498 - 13611.323: 97.9479% ( 22) 00:07:25.505 13611.323 - 13712.148: 98.0199% ( 10) 00:07:25.505 13712.148 - 13812.972: 98.0631% ( 6) 00:07:25.505 13812.972 - 13913.797: 98.0991% ( 5) 00:07:25.505 13913.797 - 14014.622: 98.1495% ( 7) 00:07:25.505 14014.622 - 14115.446: 98.2359% ( 12) 00:07:25.505 14115.446 - 14216.271: 98.3079% ( 10) 00:07:25.505 14216.271 - 14317.095: 98.4879% ( 25) 00:07:25.505 14317.095 - 14417.920: 98.5311% ( 6) 00:07:25.505 14417.920 - 14518.745: 98.5815% ( 7) 00:07:25.505 14518.745 - 14619.569: 98.6175% ( 5) 00:07:25.506 14922.043 - 15022.868: 98.6247% ( 1) 00:07:25.506 15325.342 - 15426.166: 98.6535% ( 4) 00:07:25.506 15426.166 - 15526.991: 98.7111% ( 8) 00:07:25.506 15526.991 - 15627.815: 98.8119% ( 14) 00:07:25.506 15627.815 - 15728.640: 98.8839% ( 10) 00:07:25.506 15728.640 - 15829.465: 98.9127% ( 4) 00:07:25.506 15829.465 - 15930.289: 98.9343% ( 3) 00:07:25.506 15930.289 - 16031.114: 98.9703% ( 5) 00:07:25.506 16031.114 - 16131.938: 98.9991% ( 4) 00:07:25.506 16131.938 - 16232.763: 99.0351% ( 5) 00:07:25.506 16232.763 - 16333.588: 99.0711% ( 5) 00:07:25.506 16333.588 - 16434.412: 99.0783% ( 1) 00:07:25.506 23592.960 - 23693.785: 99.0855% ( 1) 00:07:25.506 23693.785 - 23794.609: 99.1071% ( 3) 00:07:25.506 23794.609 - 23895.434: 99.1359% ( 4) 00:07:25.506 23895.434 - 23996.258: 99.1575% ( 3) 00:07:25.506 23996.258 - 24097.083: 99.1863% ( 4) 00:07:25.506 24097.083 - 24197.908: 99.2151% ( 4) 00:07:25.506 24197.908 - 24298.732: 99.2368% ( 3) 00:07:25.506 24298.732 - 24399.557: 99.2656% ( 4) 00:07:25.506 24399.557 - 24500.382: 99.2944% ( 4) 00:07:25.506 24500.382 - 24601.206: 99.3160% ( 3) 00:07:25.506 24601.206 - 24702.031: 99.3448% ( 4) 00:07:25.506 24702.031 - 24802.855: 99.3736% ( 4) 00:07:25.506 24802.855 - 24903.680: 99.4024% ( 4) 00:07:25.506 24903.680 - 25004.505: 99.4240% ( 3) 00:07:25.506 25004.505 - 25105.329: 99.4528% ( 4) 00:07:25.506 25105.329 - 25206.154: 99.4816% ( 4) 00:07:25.506 25206.154 - 25306.978: 99.5032% ( 3) 00:07:25.506 25306.978 - 25407.803: 99.5320% ( 4) 00:07:25.506 25407.803 - 25508.628: 99.5392% ( 1) 00:07:25.506 30449.034 - 30650.683: 99.5824% ( 6) 00:07:25.506 30650.683 - 30852.332: 99.6328% ( 7) 00:07:25.506 30852.332 - 31053.982: 99.6832% ( 7) 00:07:25.506 31053.982 - 31255.631: 99.7336% ( 7) 00:07:25.506 31255.631 - 
31457.280: 99.7840% ( 7) 00:07:25.506 31457.280 - 31658.929: 99.8344% ( 7) 00:07:25.506 31658.929 - 31860.578: 99.8848% ( 7) 00:07:25.506 31860.578 - 32062.228: 99.9424% ( 8) 00:07:25.506 32062.228 - 32263.877: 99.9928% ( 7) 00:07:25.506 32263.877 - 32465.526: 100.0000% ( 1) 00:07:25.506 00:07:25.506 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:25.506 ============================================================================== 00:07:25.506 Range in us Cumulative IO count 00:07:25.506 6024.271 - 6049.477: 0.0072% ( 1) 00:07:25.506 6175.508 - 6200.714: 0.0143% ( 1) 00:07:25.506 6251.126 - 6276.332: 0.0287% ( 2) 00:07:25.506 6276.332 - 6301.538: 0.0430% ( 2) 00:07:25.506 6301.538 - 6326.745: 0.0788% ( 5) 00:07:25.506 6326.745 - 6351.951: 0.1290% ( 7) 00:07:25.506 6351.951 - 6377.157: 0.1792% ( 7) 00:07:25.506 6377.157 - 6402.363: 0.2365% ( 8) 00:07:25.506 6402.363 - 6427.569: 0.2867% ( 7) 00:07:25.506 6427.569 - 6452.775: 0.4014% ( 16) 00:07:25.506 6452.775 - 6503.188: 0.5806% ( 25) 00:07:25.506 6503.188 - 6553.600: 0.7884% ( 29) 00:07:25.506 6553.600 - 6604.012: 0.9246% ( 19) 00:07:25.506 6604.012 - 6654.425: 1.0608% ( 19) 00:07:25.506 6654.425 - 6704.837: 1.2615% ( 28) 00:07:25.506 6704.837 - 6755.249: 1.3045% ( 6) 00:07:25.506 6755.249 - 6805.662: 1.3690% ( 9) 00:07:25.506 6805.662 - 6856.074: 1.4908% ( 17) 00:07:25.506 6856.074 - 6906.486: 1.7130% ( 31) 00:07:25.506 6906.486 - 6956.898: 2.0499% ( 47) 00:07:25.506 6956.898 - 7007.311: 2.3939% ( 48) 00:07:25.506 7007.311 - 7057.723: 3.2038% ( 113) 00:07:25.506 7057.723 - 7108.135: 4.1428% ( 131) 00:07:25.506 7108.135 - 7158.548: 4.8380% ( 97) 00:07:25.506 7158.548 - 7208.960: 5.5189% ( 95) 00:07:25.506 7208.960 - 7259.372: 6.2142% ( 97) 00:07:25.506 7259.372 - 7309.785: 7.1674% ( 133) 00:07:25.506 7309.785 - 7360.197: 8.1780% ( 141) 00:07:25.506 7360.197 - 7410.609: 8.7084% ( 74) 00:07:25.506 7410.609 - 7461.022: 9.2460% ( 75) 00:07:25.506 7461.022 - 7511.434: 9.6904% ( 62) 00:07:25.506 7511.434 - 7561.846: 9.9914% ( 42) 00:07:25.506 7561.846 - 7612.258: 10.2924% ( 42) 00:07:25.506 7612.258 - 7662.671: 10.5505% ( 36) 00:07:25.506 7662.671 - 7713.083: 10.8157% ( 37) 00:07:25.506 7713.083 - 7763.495: 11.4321% ( 86) 00:07:25.506 7763.495 - 7813.908: 11.8979% ( 65) 00:07:25.506 7813.908 - 7864.320: 12.4068% ( 71) 00:07:25.506 7864.320 - 7914.732: 13.0662% ( 92) 00:07:25.506 7914.732 - 7965.145: 14.0338% ( 135) 00:07:25.506 7965.145 - 8015.557: 15.1519% ( 156) 00:07:25.506 8015.557 - 8065.969: 16.1769% ( 143) 00:07:25.506 8065.969 - 8116.382: 17.3954% ( 170) 00:07:25.506 8116.382 - 8166.794: 18.6712% ( 178) 00:07:25.506 8166.794 - 8217.206: 20.4702% ( 251) 00:07:25.506 8217.206 - 8267.618: 22.2835% ( 253) 00:07:25.506 8267.618 - 8318.031: 24.2259% ( 271) 00:07:25.506 8318.031 - 8368.443: 26.1468% ( 268) 00:07:25.506 8368.443 - 8418.855: 28.3042% ( 301) 00:07:25.506 8418.855 - 8469.268: 30.5906% ( 319) 00:07:25.506 8469.268 - 8519.680: 32.8268% ( 312) 00:07:25.506 8519.680 - 8570.092: 35.5075% ( 374) 00:07:25.506 8570.092 - 8620.505: 38.2096% ( 377) 00:07:25.506 8620.505 - 8670.917: 40.7468% ( 354) 00:07:25.506 8670.917 - 8721.329: 43.3200% ( 359) 00:07:25.506 8721.329 - 8771.742: 45.9934% ( 373) 00:07:25.506 8771.742 - 8822.154: 48.8460% ( 398) 00:07:25.506 8822.154 - 8872.566: 52.4656% ( 505) 00:07:25.506 8872.566 - 8922.978: 55.8200% ( 468) 00:07:25.506 8922.978 - 8973.391: 59.0883% ( 456) 00:07:25.506 8973.391 - 9023.803: 61.5109% ( 338) 00:07:25.506 9023.803 - 9074.215: 64.0983% ( 361) 00:07:25.506 9074.215 - 
9124.628: 66.5138% ( 337) 00:07:25.506 9124.628 - 9175.040: 68.6138% ( 293) 00:07:25.506 9175.040 - 9225.452: 70.4845% ( 261) 00:07:25.506 9225.452 - 9275.865: 72.3265% ( 257) 00:07:25.506 9275.865 - 9326.277: 73.6167% ( 180) 00:07:25.506 9326.277 - 9376.689: 74.7921% ( 164) 00:07:25.506 9376.689 - 9427.102: 75.7669% ( 136) 00:07:25.506 9427.102 - 9477.514: 76.6198% ( 119) 00:07:25.506 9477.514 - 9527.926: 77.3868% ( 107) 00:07:25.506 9527.926 - 9578.338: 78.0318% ( 90) 00:07:25.506 9578.338 - 9628.751: 78.5192% ( 68) 00:07:25.506 9628.751 - 9679.163: 79.1356% ( 86) 00:07:25.506 9679.163 - 9729.575: 79.6015% ( 65) 00:07:25.506 9729.575 - 9779.988: 80.2251% ( 87) 00:07:25.506 9779.988 - 9830.400: 80.6981% ( 66) 00:07:25.506 9830.400 - 9880.812: 81.3217% ( 87) 00:07:25.506 9880.812 - 9931.225: 82.0456% ( 101) 00:07:25.506 9931.225 - 9981.637: 82.7408% ( 97) 00:07:25.506 9981.637 - 10032.049: 83.4361% ( 97) 00:07:25.506 10032.049 - 10082.462: 83.8804% ( 62) 00:07:25.506 10082.462 - 10132.874: 84.3822% ( 70) 00:07:25.506 10132.874 - 10183.286: 84.8265% ( 62) 00:07:25.506 10183.286 - 10233.698: 85.2853% ( 64) 00:07:25.506 10233.698 - 10284.111: 85.7583% ( 66) 00:07:25.506 10284.111 - 10334.523: 86.3174% ( 78) 00:07:25.506 10334.523 - 10384.935: 86.9624% ( 90) 00:07:25.506 10384.935 - 10435.348: 87.4570% ( 69) 00:07:25.506 10435.348 - 10485.760: 87.8512% ( 55) 00:07:25.506 10485.760 - 10536.172: 88.2597% ( 57) 00:07:25.506 10536.172 - 10586.585: 88.6396% ( 53) 00:07:25.506 10586.585 - 10636.997: 89.0625% ( 59) 00:07:25.506 10636.997 - 10687.409: 89.6001% ( 75) 00:07:25.506 10687.409 - 10737.822: 90.0373% ( 61) 00:07:25.506 10737.822 - 10788.234: 90.5390% ( 70) 00:07:25.506 10788.234 - 10838.646: 90.9045% ( 51) 00:07:25.506 10838.646 - 10889.058: 91.3202% ( 58) 00:07:25.506 10889.058 - 10939.471: 91.6571% ( 47) 00:07:25.506 10939.471 - 10989.883: 91.9366% ( 39) 00:07:25.506 10989.883 - 11040.295: 92.3022% ( 51) 00:07:25.506 11040.295 - 11090.708: 92.6390% ( 47) 00:07:25.506 11090.708 - 11141.120: 92.8827% ( 34) 00:07:25.506 11141.120 - 11191.532: 93.2053% ( 45) 00:07:25.506 11191.532 - 11241.945: 93.5636% ( 50) 00:07:25.506 11241.945 - 11292.357: 93.7930% ( 32) 00:07:25.506 11292.357 - 11342.769: 93.9937% ( 28) 00:07:25.506 11342.769 - 11393.182: 94.1729% ( 25) 00:07:25.506 11393.182 - 11443.594: 94.2947% ( 17) 00:07:25.506 11443.594 - 11494.006: 94.4524% ( 22) 00:07:25.506 11494.006 - 11544.418: 94.5958% ( 20) 00:07:25.506 11544.418 - 11594.831: 94.7319% ( 19) 00:07:25.506 11594.831 - 11645.243: 94.8968% ( 23) 00:07:25.506 11645.243 - 11695.655: 94.9971% ( 14) 00:07:25.506 11695.655 - 11746.068: 95.0688% ( 10) 00:07:25.506 11746.068 - 11796.480: 95.1333% ( 9) 00:07:25.506 11796.480 - 11846.892: 95.1907% ( 8) 00:07:25.506 11846.892 - 11897.305: 95.2552% ( 9) 00:07:25.506 11897.305 - 11947.717: 95.2910% ( 5) 00:07:25.506 11947.717 - 11998.129: 95.3197% ( 4) 00:07:25.506 11998.129 - 12048.542: 95.3412% ( 3) 00:07:25.506 12048.542 - 12098.954: 95.3627% ( 3) 00:07:25.506 12098.954 - 12149.366: 95.4415% ( 11) 00:07:25.506 12149.366 - 12199.778: 95.5060% ( 9) 00:07:25.506 12199.778 - 12250.191: 95.5562% ( 7) 00:07:25.506 12250.191 - 12300.603: 95.6135% ( 8) 00:07:25.506 12300.603 - 12351.015: 95.6709% ( 8) 00:07:25.506 12351.015 - 12401.428: 95.7354% ( 9) 00:07:25.506 12401.428 - 12451.840: 95.7856% ( 7) 00:07:25.506 12451.840 - 12502.252: 95.8429% ( 8) 00:07:25.506 12502.252 - 12552.665: 95.9504% ( 15) 00:07:25.506 12552.665 - 12603.077: 96.0077% ( 8) 00:07:25.506 12603.077 - 12653.489: 96.0866% ( 11) 
00:07:25.507 12653.489 - 12703.902: 96.2013% ( 16) 00:07:25.507 12703.902 - 12754.314: 96.3374% ( 19) 00:07:25.507 12754.314 - 12804.726: 96.4450% ( 15) 00:07:25.507 12804.726 - 12855.138: 96.5238% ( 11) 00:07:25.507 12855.138 - 12905.551: 96.5883% ( 9) 00:07:25.507 12905.551 - 13006.375: 96.7962% ( 29) 00:07:25.507 13006.375 - 13107.200: 97.0183% ( 31) 00:07:25.507 13107.200 - 13208.025: 97.1760% ( 22) 00:07:25.507 13208.025 - 13308.849: 97.2979% ( 17) 00:07:25.507 13308.849 - 13409.674: 97.4914% ( 27) 00:07:25.507 13409.674 - 13510.498: 97.6562% ( 23) 00:07:25.507 13510.498 - 13611.323: 97.7924% ( 19) 00:07:25.507 13611.323 - 13712.148: 97.8928% ( 14) 00:07:25.507 13712.148 - 13812.972: 97.9788% ( 12) 00:07:25.507 13812.972 - 13913.797: 98.0648% ( 12) 00:07:25.507 13913.797 - 14014.622: 98.1150% ( 7) 00:07:25.507 14014.622 - 14115.446: 98.1508% ( 5) 00:07:25.507 14115.446 - 14216.271: 98.1651% ( 2) 00:07:25.507 14216.271 - 14317.095: 98.2010% ( 5) 00:07:25.507 14317.095 - 14417.920: 98.2440% ( 6) 00:07:25.507 14417.920 - 14518.745: 98.4303% ( 26) 00:07:25.507 14518.745 - 14619.569: 98.5378% ( 15) 00:07:25.507 14619.569 - 14720.394: 98.5808% ( 6) 00:07:25.507 14720.394 - 14821.218: 98.6239% ( 6) 00:07:25.507 14922.043 - 15022.868: 98.6669% ( 6) 00:07:25.507 15022.868 - 15123.692: 98.7242% ( 8) 00:07:25.507 15123.692 - 15224.517: 98.8030% ( 11) 00:07:25.507 15224.517 - 15325.342: 98.9034% ( 14) 00:07:25.507 15325.342 - 15426.166: 98.9177% ( 2) 00:07:25.507 15426.166 - 15526.991: 98.9321% ( 2) 00:07:25.507 15526.991 - 15627.815: 98.9536% ( 3) 00:07:25.507 15627.815 - 15728.640: 98.9822% ( 4) 00:07:25.507 15728.640 - 15829.465: 99.0037% ( 3) 00:07:25.507 15829.465 - 15930.289: 99.0467% ( 6) 00:07:25.507 15930.289 - 16031.114: 99.0682% ( 3) 00:07:25.507 16031.114 - 16131.938: 99.0826% ( 2) 00:07:25.507 16434.412 - 16535.237: 99.1041% ( 3) 00:07:25.507 16535.237 - 16636.062: 99.1256% ( 3) 00:07:25.507 16636.062 - 16736.886: 99.1542% ( 4) 00:07:25.507 16736.886 - 16837.711: 99.1829% ( 4) 00:07:25.507 16837.711 - 16938.535: 99.2044% ( 3) 00:07:25.507 16938.535 - 17039.360: 99.2331% ( 4) 00:07:25.507 17039.360 - 17140.185: 99.2546% ( 3) 00:07:25.507 17140.185 - 17241.009: 99.2833% ( 4) 00:07:25.507 17241.009 - 17341.834: 99.3048% ( 3) 00:07:25.507 17341.834 - 17442.658: 99.3334% ( 4) 00:07:25.507 17442.658 - 17543.483: 99.3621% ( 4) 00:07:25.507 17543.483 - 17644.308: 99.3908% ( 4) 00:07:25.507 17644.308 - 17745.132: 99.4123% ( 3) 00:07:25.507 17745.132 - 17845.957: 99.4409% ( 4) 00:07:25.507 17845.957 - 17946.782: 99.4696% ( 4) 00:07:25.507 17946.782 - 18047.606: 99.4911% ( 3) 00:07:25.507 18047.606 - 18148.431: 99.5198% ( 4) 00:07:25.507 18148.431 - 18249.255: 99.5413% ( 3) 00:07:25.507 23189.662 - 23290.486: 99.5628% ( 3) 00:07:25.507 23290.486 - 23391.311: 99.5843% ( 3) 00:07:25.507 23391.311 - 23492.135: 99.6130% ( 4) 00:07:25.507 23492.135 - 23592.960: 99.6416% ( 4) 00:07:25.507 23592.960 - 23693.785: 99.6631% ( 3) 00:07:25.507 23693.785 - 23794.609: 99.6918% ( 4) 00:07:25.507 23794.609 - 23895.434: 99.7133% ( 3) 00:07:25.507 23895.434 - 23996.258: 99.7420% ( 4) 00:07:25.507 23996.258 - 24097.083: 99.7706% ( 4) 00:07:25.507 24097.083 - 24197.908: 99.7921% ( 3) 00:07:25.507 24197.908 - 24298.732: 99.8208% ( 4) 00:07:25.507 24298.732 - 24399.557: 99.8495% ( 4) 00:07:25.507 24399.557 - 24500.382: 99.8710% ( 3) 00:07:25.507 24500.382 - 24601.206: 99.8997% ( 4) 00:07:25.507 24601.206 - 24702.031: 99.9283% ( 4) 00:07:25.507 24702.031 - 24802.855: 99.9570% ( 4) 00:07:25.507 24802.855 - 24903.680: 
99.9785% ( 3) 00:07:25.507 24903.680 - 25004.505: 100.0000% ( 3) 00:07:25.507 00:07:25.507 ************************************ 00:07:25.507 END TEST nvme_perf 00:07:25.507 ************************************ 00:07:25.507 22:48:04 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:25.507 00:07:25.507 real 0m2.531s 00:07:25.507 user 0m2.213s 00:07:25.507 sys 0m0.215s 00:07:25.507 22:48:04 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.507 22:48:04 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.507 22:48:04 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:25.507 22:48:04 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:25.507 22:48:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.507 22:48:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.507 ************************************ 00:07:25.507 START TEST nvme_hello_world 00:07:25.507 ************************************ 00:07:25.507 22:48:04 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:25.507 Initializing NVMe Controllers 00:07:25.507 Attached to 0000:00:13.0 00:07:25.507 Namespace ID: 1 size: 1GB 00:07:25.507 Attached to 0000:00:10.0 00:07:25.507 Namespace ID: 1 size: 6GB 00:07:25.507 Attached to 0000:00:11.0 00:07:25.507 Namespace ID: 1 size: 5GB 00:07:25.507 Attached to 0000:00:12.0 00:07:25.507 Namespace ID: 1 size: 4GB 00:07:25.507 Namespace ID: 2 size: 4GB 00:07:25.507 Namespace ID: 3 size: 4GB 00:07:25.507 Initialization complete. 00:07:25.507 INFO: using host memory buffer for IO 00:07:25.507 Hello world! 00:07:25.507 INFO: using host memory buffer for IO 00:07:25.507 Hello world! 00:07:25.507 INFO: using host memory buffer for IO 00:07:25.507 Hello world! 00:07:25.507 INFO: using host memory buffer for IO 00:07:25.507 Hello world! 00:07:25.507 INFO: using host memory buffer for IO 00:07:25.507 Hello world! 00:07:25.507 INFO: using host memory buffer for IO 00:07:25.507 Hello world! 
00:07:25.507 ************************************ 00:07:25.507 END TEST nvme_hello_world 00:07:25.507 ************************************ 00:07:25.507 00:07:25.507 real 0m0.241s 00:07:25.507 user 0m0.088s 00:07:25.507 sys 0m0.101s 00:07:25.507 22:48:04 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.507 22:48:04 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:25.768 22:48:04 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:25.768 22:48:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.768 22:48:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.768 22:48:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.768 ************************************ 00:07:25.768 START TEST nvme_sgl 00:07:25.768 ************************************ 00:07:25.768 22:48:04 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:25.768 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:25.768 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:25.768 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:25.768 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:25.768 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:25.768 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:25.768 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:25.768 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:25.768 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:25.768 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:25.768 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:26.028 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:26.028 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:26.028 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:26.028 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:26.028 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:26.028 NVMe Readv/Writev Request test 00:07:26.028 Attached to 0000:00:13.0 00:07:26.028 Attached to 0000:00:10.0 00:07:26.028 Attached to 0000:00:11.0 00:07:26.028 Attached to 0000:00:12.0 00:07:26.028 0000:00:10.0: build_io_request_2 test passed 00:07:26.028 0000:00:10.0: build_io_request_4 test passed 00:07:26.028 0000:00:10.0: build_io_request_5 test passed 00:07:26.028 0000:00:10.0: build_io_request_6 test passed 00:07:26.028 0000:00:10.0: build_io_request_7 test passed 00:07:26.028 0000:00:10.0: build_io_request_10 test passed 00:07:26.028 0000:00:11.0: build_io_request_2 test passed 00:07:26.028 0000:00:11.0: build_io_request_4 test passed 00:07:26.028 0000:00:11.0: build_io_request_5 test passed 00:07:26.028 0000:00:11.0: build_io_request_6 test passed 00:07:26.029 0000:00:11.0: build_io_request_7 test passed 00:07:26.029 0000:00:11.0: build_io_request_10 test passed 00:07:26.029 Cleaning up... 00:07:26.029 ************************************ 00:07:26.029 END TEST nvme_sgl 00:07:26.029 ************************************ 00:07:26.029 00:07:26.029 real 0m0.303s 00:07:26.029 user 0m0.164s 00:07:26.029 sys 0m0.095s 00:07:26.029 22:48:04 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.029 22:48:04 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:26.029 22:48:04 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:26.029 22:48:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.029 22:48:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.029 22:48:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.029 ************************************ 00:07:26.029 START TEST nvme_e2edp 00:07:26.029 ************************************ 00:07:26.029 22:48:05 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:26.287 NVMe Write/Read with End-to-End data protection test 00:07:26.287 Attached to 0000:00:13.0 00:07:26.287 Attached to 0000:00:10.0 00:07:26.287 Attached to 0000:00:11.0 00:07:26.287 Attached to 0000:00:12.0 00:07:26.287 Cleaning up... 
00:07:26.287 ************************************ 00:07:26.287 END TEST nvme_e2edp 00:07:26.287 ************************************ 00:07:26.287 00:07:26.287 real 0m0.211s 00:07:26.287 user 0m0.074s 00:07:26.287 sys 0m0.094s 00:07:26.287 22:48:05 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.287 22:48:05 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:26.287 22:48:05 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:26.287 22:48:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.287 22:48:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.287 22:48:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.287 ************************************ 00:07:26.287 START TEST nvme_reserve 00:07:26.287 ************************************ 00:07:26.287 22:48:05 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:26.545 ===================================================== 00:07:26.545 NVMe Controller at PCI bus 0, device 19, function 0 00:07:26.545 ===================================================== 00:07:26.545 Reservations: Not Supported 00:07:26.545 ===================================================== 00:07:26.545 NVMe Controller at PCI bus 0, device 16, function 0 00:07:26.545 ===================================================== 00:07:26.545 Reservations: Not Supported 00:07:26.545 ===================================================== 00:07:26.545 NVMe Controller at PCI bus 0, device 17, function 0 00:07:26.545 ===================================================== 00:07:26.545 Reservations: Not Supported 00:07:26.545 ===================================================== 00:07:26.545 NVMe Controller at PCI bus 0, device 18, function 0 00:07:26.545 ===================================================== 00:07:26.545 Reservations: Not Supported 00:07:26.545 Reservation test passed 00:07:26.545 ************************************ 00:07:26.545 END TEST nvme_reserve 00:07:26.545 ************************************ 00:07:26.545 00:07:26.545 real 0m0.229s 00:07:26.545 user 0m0.082s 00:07:26.545 sys 0m0.091s 00:07:26.545 22:48:05 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.545 22:48:05 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:26.545 22:48:05 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:26.545 22:48:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.545 22:48:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.545 22:48:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.545 ************************************ 00:07:26.545 START TEST nvme_err_injection 00:07:26.545 ************************************ 00:07:26.545 22:48:05 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:26.804 NVMe Error Injection test 00:07:26.804 Attached to 0000:00:13.0 00:07:26.804 Attached to 0000:00:10.0 00:07:26.804 Attached to 0000:00:11.0 00:07:26.804 Attached to 0000:00:12.0 00:07:26.804 0000:00:13.0: get features failed as expected 00:07:26.804 0000:00:10.0: get features failed as expected 00:07:26.804 0000:00:11.0: get features failed as expected 00:07:26.804 0000:00:12.0: get features failed as expected 00:07:26.804 
0000:00:13.0: get features successfully as expected 00:07:26.804 0000:00:10.0: get features successfully as expected 00:07:26.804 0000:00:11.0: get features successfully as expected 00:07:26.804 0000:00:12.0: get features successfully as expected 00:07:26.804 0000:00:13.0: read failed as expected 00:07:26.804 0000:00:10.0: read failed as expected 00:07:26.804 0000:00:11.0: read failed as expected 00:07:26.804 0000:00:12.0: read failed as expected 00:07:26.804 0000:00:13.0: read successfully as expected 00:07:26.804 0000:00:10.0: read successfully as expected 00:07:26.804 0000:00:11.0: read successfully as expected 00:07:26.804 0000:00:12.0: read successfully as expected 00:07:26.804 Cleaning up... 00:07:26.804 ************************************ 00:07:26.804 END TEST nvme_err_injection 00:07:26.804 ************************************ 00:07:26.804 00:07:26.804 real 0m0.235s 00:07:26.804 user 0m0.089s 00:07:26.804 sys 0m0.095s 00:07:26.804 22:48:05 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.804 22:48:05 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:26.804 22:48:05 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:26.804 22:48:05 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:26.804 22:48:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.804 22:48:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.804 ************************************ 00:07:26.804 START TEST nvme_overhead 00:07:26.804 ************************************ 00:07:26.804 22:48:05 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:28.178 Initializing NVMe Controllers 00:07:28.178 Attached to 0000:00:13.0 00:07:28.178 Attached to 0000:00:10.0 00:07:28.178 Attached to 0000:00:11.0 00:07:28.178 Attached to 0000:00:12.0 00:07:28.178 Initialization complete. Launching workers. 
00:07:28.178 submit (in ns) avg, min, max = 12294.7, 11262.3, 76510.8 00:07:28.178 complete (in ns) avg, min, max = 8297.5, 7813.1, 239338.5 00:07:28.178 00:07:28.178 Submit histogram 00:07:28.178 ================ 00:07:28.178 Range in us Cumulative Count 00:07:28.178 11.225 - 11.274: 0.0063% ( 1) 00:07:28.178 11.471 - 11.520: 0.0127% ( 1) 00:07:28.178 11.520 - 11.569: 0.0190% ( 1) 00:07:28.178 11.569 - 11.618: 0.0570% ( 6) 00:07:28.178 11.618 - 11.668: 0.1583% ( 16) 00:07:28.178 11.668 - 11.717: 0.4179% ( 41) 00:07:28.178 11.717 - 11.766: 1.2221% ( 127) 00:07:28.178 11.766 - 11.815: 3.1408% ( 303) 00:07:28.178 11.815 - 11.865: 6.9909% ( 608) 00:07:28.178 11.865 - 11.914: 14.2287% ( 1143) 00:07:28.178 11.914 - 11.963: 24.0058% ( 1544) 00:07:28.178 11.963 - 12.012: 34.1122% ( 1596) 00:07:28.178 12.012 - 12.062: 45.1558% ( 1744) 00:07:28.178 12.062 - 12.111: 55.9334% ( 1702) 00:07:28.178 12.111 - 12.160: 64.2224% ( 1309) 00:07:28.178 12.160 - 12.209: 70.9347% ( 1060) 00:07:28.178 12.209 - 12.258: 76.1145% ( 818) 00:07:28.178 12.258 - 12.308: 79.9265% ( 602) 00:07:28.178 12.308 - 12.357: 83.0104% ( 487) 00:07:28.178 12.357 - 12.406: 85.6066% ( 410) 00:07:28.178 12.406 - 12.455: 87.6900% ( 329) 00:07:28.178 12.455 - 12.505: 89.6720% ( 313) 00:07:28.178 12.505 - 12.554: 91.1537% ( 234) 00:07:28.178 12.554 - 12.603: 92.5089% ( 214) 00:07:28.178 12.603 - 12.702: 94.3136% ( 285) 00:07:28.178 12.702 - 12.800: 95.4281% ( 176) 00:07:28.178 12.800 - 12.898: 96.2006% ( 122) 00:07:28.178 12.898 - 12.997: 96.5742% ( 59) 00:07:28.178 12.997 - 13.095: 96.7389% ( 26) 00:07:28.178 13.095 - 13.194: 96.8148% ( 12) 00:07:28.178 13.194 - 13.292: 96.9035% ( 14) 00:07:28.178 13.292 - 13.391: 96.9668% ( 10) 00:07:28.178 13.391 - 13.489: 96.9921% ( 4) 00:07:28.178 13.489 - 13.588: 97.0175% ( 4) 00:07:28.178 13.686 - 13.785: 97.0301% ( 2) 00:07:28.178 13.785 - 13.883: 97.0871% ( 9) 00:07:28.178 13.883 - 13.982: 97.1505% ( 10) 00:07:28.178 13.982 - 14.080: 97.2581% ( 17) 00:07:28.178 14.080 - 14.178: 97.3848% ( 20) 00:07:28.178 14.178 - 14.277: 97.4987% ( 18) 00:07:28.178 14.277 - 14.375: 97.6001% ( 16) 00:07:28.178 14.375 - 14.474: 97.7077% ( 17) 00:07:28.178 14.474 - 14.572: 97.7900% ( 13) 00:07:28.178 14.572 - 14.671: 97.8407% ( 8) 00:07:28.178 14.671 - 14.769: 97.8850% ( 7) 00:07:28.178 14.769 - 14.868: 97.9357% ( 8) 00:07:28.178 14.868 - 14.966: 97.9737% ( 6) 00:07:28.178 14.966 - 15.065: 98.0117% ( 6) 00:07:28.178 15.065 - 15.163: 98.0560% ( 7) 00:07:28.178 15.163 - 15.262: 98.0876% ( 5) 00:07:28.178 15.262 - 15.360: 98.0940% ( 1) 00:07:28.178 15.360 - 15.458: 98.1130% ( 3) 00:07:28.178 15.458 - 15.557: 98.1193% ( 1) 00:07:28.178 15.557 - 15.655: 98.1256% ( 1) 00:07:28.178 15.655 - 15.754: 98.1510% ( 4) 00:07:28.178 15.754 - 15.852: 98.1573% ( 1) 00:07:28.178 15.852 - 15.951: 98.1636% ( 1) 00:07:28.178 15.951 - 16.049: 98.1763% ( 2) 00:07:28.178 16.049 - 16.148: 98.2080% ( 5) 00:07:28.178 16.148 - 16.246: 98.2270% ( 3) 00:07:28.178 16.246 - 16.345: 98.2396% ( 2) 00:07:28.178 16.345 - 16.443: 98.2903% ( 8) 00:07:28.178 16.443 - 16.542: 98.3219% ( 5) 00:07:28.178 16.542 - 16.640: 98.3409% ( 3) 00:07:28.178 16.640 - 16.738: 98.3663% ( 4) 00:07:28.178 16.738 - 16.837: 98.3789% ( 2) 00:07:28.178 16.837 - 16.935: 98.3853% ( 1) 00:07:28.179 16.935 - 17.034: 98.3979% ( 2) 00:07:28.179 17.034 - 17.132: 98.4043% ( 1) 00:07:28.179 17.132 - 17.231: 98.4169% ( 2) 00:07:28.179 17.329 - 17.428: 98.4359% ( 3) 00:07:28.179 17.428 - 17.526: 98.4486% ( 2) 00:07:28.179 17.526 - 17.625: 98.4549% ( 1) 00:07:28.179 17.625 - 17.723: 
98.4802% ( 4) 00:07:28.179 17.723 - 17.822: 98.5119% ( 5) 00:07:28.179 17.822 - 17.920: 98.5752% ( 10) 00:07:28.179 17.920 - 18.018: 98.6512% ( 12) 00:07:28.179 18.018 - 18.117: 98.6955% ( 7) 00:07:28.179 18.117 - 18.215: 98.7715% ( 12) 00:07:28.179 18.215 - 18.314: 98.8285% ( 9) 00:07:28.179 18.314 - 18.412: 98.9045% ( 12) 00:07:28.179 18.412 - 18.511: 98.9868% ( 13) 00:07:28.179 18.511 - 18.609: 99.1008% ( 18) 00:07:28.179 18.609 - 18.708: 99.1578% ( 9) 00:07:28.179 18.708 - 18.806: 99.2211% ( 10) 00:07:28.179 18.806 - 18.905: 99.2844% ( 10) 00:07:28.179 18.905 - 19.003: 99.3794% ( 15) 00:07:28.179 19.003 - 19.102: 99.4364% ( 9) 00:07:28.179 19.102 - 19.200: 99.4934% ( 9) 00:07:28.179 19.200 - 19.298: 99.5631% ( 11) 00:07:28.179 19.298 - 19.397: 99.5884% ( 4) 00:07:28.179 19.397 - 19.495: 99.5947% ( 1) 00:07:28.179 19.495 - 19.594: 99.6327% ( 6) 00:07:28.179 19.594 - 19.692: 99.6834% ( 8) 00:07:28.179 19.692 - 19.791: 99.7024% ( 3) 00:07:28.179 19.791 - 19.889: 99.7214% ( 3) 00:07:28.179 19.889 - 19.988: 99.7340% ( 2) 00:07:28.179 19.988 - 20.086: 99.7404% ( 1) 00:07:28.179 20.086 - 20.185: 99.7594% ( 3) 00:07:28.179 20.185 - 20.283: 99.7657% ( 1) 00:07:28.179 20.283 - 20.382: 99.7784% ( 2) 00:07:28.179 20.578 - 20.677: 99.7847% ( 1) 00:07:28.179 20.775 - 20.874: 99.7974% ( 2) 00:07:28.179 20.874 - 20.972: 99.8037% ( 1) 00:07:28.179 20.972 - 21.071: 99.8100% ( 1) 00:07:28.179 21.268 - 21.366: 99.8164% ( 1) 00:07:28.179 21.366 - 21.465: 99.8227% ( 1) 00:07:28.179 22.055 - 22.154: 99.8290% ( 1) 00:07:28.179 22.154 - 22.252: 99.8354% ( 1) 00:07:28.179 22.252 - 22.351: 99.8417% ( 1) 00:07:28.179 22.646 - 22.745: 99.8480% ( 1) 00:07:28.179 23.335 - 23.434: 99.8544% ( 1) 00:07:28.179 23.532 - 23.631: 99.8607% ( 1) 00:07:28.179 23.828 - 23.926: 99.8670% ( 1) 00:07:28.179 23.926 - 24.025: 99.8734% ( 1) 00:07:28.179 24.320 - 24.418: 99.8797% ( 1) 00:07:28.179 24.418 - 24.517: 99.8860% ( 1) 00:07:28.179 24.812 - 24.911: 99.8924% ( 1) 00:07:28.179 25.600 - 25.797: 99.8987% ( 1) 00:07:28.179 25.797 - 25.994: 99.9050% ( 1) 00:07:28.179 26.782 - 26.978: 99.9113% ( 1) 00:07:28.179 27.175 - 27.372: 99.9177% ( 1) 00:07:28.179 27.766 - 27.963: 99.9240% ( 1) 00:07:28.179 29.145 - 29.342: 99.9303% ( 1) 00:07:28.179 33.674 - 33.871: 99.9367% ( 1) 00:07:28.179 38.006 - 38.203: 99.9430% ( 1) 00:07:28.179 39.188 - 39.385: 99.9493% ( 1) 00:07:28.179 45.095 - 45.292: 99.9557% ( 1) 00:07:28.179 46.080 - 46.277: 99.9620% ( 1) 00:07:28.179 47.655 - 47.852: 99.9683% ( 1) 00:07:28.179 48.837 - 49.034: 99.9747% ( 1) 00:07:28.179 51.988 - 52.382: 99.9810% ( 1) 00:07:28.179 57.895 - 58.289: 99.9873% ( 1) 00:07:28.179 59.865 - 60.258: 99.9937% ( 1) 00:07:28.179 76.406 - 76.800: 100.0000% ( 1) 00:07:28.179 00:07:28.179 Complete histogram 00:07:28.179 ================== 00:07:28.179 Range in us Cumulative Count 00:07:28.179 7.778 - 7.828: 0.0127% ( 2) 00:07:28.179 7.828 - 7.877: 0.0887% ( 12) 00:07:28.179 7.877 - 7.926: 1.7161% ( 257) 00:07:28.179 7.926 - 7.975: 9.5111% ( 1231) 00:07:28.179 7.975 - 8.025: 24.9050% ( 2431) 00:07:28.179 8.025 - 8.074: 39.0324% ( 2231) 00:07:28.179 8.074 - 8.123: 49.4618% ( 1647) 00:07:28.179 8.123 - 8.172: 60.7586% ( 1784) 00:07:28.179 8.172 - 8.222: 71.9985% ( 1775) 00:07:28.179 8.222 - 8.271: 80.4015% ( 1327) 00:07:28.179 8.271 - 8.320: 86.2335% ( 921) 00:07:28.179 8.320 - 8.369: 90.6282% ( 694) 00:07:28.179 8.369 - 8.418: 93.4460% ( 445) 00:07:28.179 8.418 - 8.468: 95.2128% ( 279) 00:07:28.179 8.468 - 8.517: 96.4096% ( 189) 00:07:28.179 8.517 - 8.566: 97.0428% ( 100) 00:07:28.179 8.566 - 
8.615: 97.5051% ( 73) 00:07:28.179 8.615 - 8.665: 97.7394% ( 37) 00:07:28.179 8.665 - 8.714: 97.8533% ( 18) 00:07:28.179 8.714 - 8.763: 97.9673% ( 18) 00:07:28.179 8.763 - 8.812: 98.0433% ( 12) 00:07:28.179 8.812 - 8.862: 98.0813% ( 6) 00:07:28.179 8.862 - 8.911: 98.1066% ( 4) 00:07:28.179 8.911 - 8.960: 98.1130% ( 1) 00:07:28.179 8.960 - 9.009: 98.1320% ( 3) 00:07:28.179 9.009 - 9.058: 98.1573% ( 4) 00:07:28.179 9.108 - 9.157: 98.1763% ( 3) 00:07:28.179 9.157 - 9.206: 98.1890% ( 2) 00:07:28.179 9.255 - 9.305: 98.2016% ( 2) 00:07:28.179 9.305 - 9.354: 98.2080% ( 1) 00:07:28.179 9.354 - 9.403: 98.2206% ( 2) 00:07:28.179 9.403 - 9.452: 98.2270% ( 1) 00:07:28.179 9.502 - 9.551: 98.2333% ( 1) 00:07:28.179 9.551 - 9.600: 98.2459% ( 2) 00:07:28.179 9.600 - 9.649: 98.2523% ( 1) 00:07:28.179 9.698 - 9.748: 98.2586% ( 1) 00:07:28.179 9.895 - 9.945: 98.2649% ( 1) 00:07:28.179 9.945 - 9.994: 98.2713% ( 1) 00:07:28.179 9.994 - 10.043: 98.2776% ( 1) 00:07:28.179 10.043 - 10.092: 98.2839% ( 1) 00:07:28.179 10.092 - 10.142: 98.2903% ( 1) 00:07:28.179 10.142 - 10.191: 98.2966% ( 1) 00:07:28.179 10.289 - 10.338: 98.3029% ( 1) 00:07:28.179 10.535 - 10.585: 98.3093% ( 1) 00:07:28.179 11.372 - 11.422: 98.3219% ( 2) 00:07:28.179 11.422 - 11.471: 98.3283% ( 1) 00:07:28.179 11.618 - 11.668: 98.3346% ( 1) 00:07:28.179 11.914 - 11.963: 98.3409% ( 1) 00:07:28.179 12.111 - 12.160: 98.3473% ( 1) 00:07:28.179 12.160 - 12.209: 98.3536% ( 1) 00:07:28.179 12.406 - 12.455: 98.3599% ( 1) 00:07:28.179 12.505 - 12.554: 98.3663% ( 1) 00:07:28.179 12.603 - 12.702: 98.3726% ( 1) 00:07:28.179 12.702 - 12.800: 98.3789% ( 1) 00:07:28.179 12.800 - 12.898: 98.3853% ( 1) 00:07:28.179 12.898 - 12.997: 98.3979% ( 2) 00:07:28.179 12.997 - 13.095: 98.4106% ( 2) 00:07:28.179 13.095 - 13.194: 98.4169% ( 1) 00:07:28.179 13.194 - 13.292: 98.4296% ( 2) 00:07:28.179 13.292 - 13.391: 98.4486% ( 3) 00:07:28.179 13.391 - 13.489: 98.4549% ( 1) 00:07:28.179 13.489 - 13.588: 98.5056% ( 8) 00:07:28.179 13.588 - 13.686: 98.5372% ( 5) 00:07:28.179 13.686 - 13.785: 98.5816% ( 7) 00:07:28.179 13.785 - 13.883: 98.6259% ( 7) 00:07:28.179 13.883 - 13.982: 98.6702% ( 7) 00:07:28.179 13.982 - 14.080: 98.7082% ( 6) 00:07:28.179 14.080 - 14.178: 98.7969% ( 14) 00:07:28.179 14.178 - 14.277: 98.8159% ( 3) 00:07:28.179 14.277 - 14.375: 98.8539% ( 6) 00:07:28.179 14.375 - 14.474: 98.9045% ( 8) 00:07:28.179 14.474 - 14.572: 99.0122% ( 17) 00:07:28.179 14.572 - 14.671: 99.0755% ( 10) 00:07:28.179 14.671 - 14.769: 99.1515% ( 12) 00:07:28.179 14.769 - 14.868: 99.2085% ( 9) 00:07:28.179 14.868 - 14.966: 99.2718% ( 10) 00:07:28.179 14.966 - 15.065: 99.3098% ( 6) 00:07:28.179 15.065 - 15.163: 99.3541% ( 7) 00:07:28.179 15.163 - 15.262: 99.3984% ( 7) 00:07:28.179 15.262 - 15.360: 99.4554% ( 9) 00:07:28.179 15.360 - 15.458: 99.5124% ( 9) 00:07:28.179 15.458 - 15.557: 99.5377% ( 4) 00:07:28.179 15.557 - 15.655: 99.5567% ( 3) 00:07:28.179 15.655 - 15.754: 99.6011% ( 7) 00:07:28.179 15.754 - 15.852: 99.6137% ( 2) 00:07:28.179 15.852 - 15.951: 99.6391% ( 4) 00:07:28.179 15.951 - 16.049: 99.6517% ( 2) 00:07:28.179 16.049 - 16.148: 99.6644% ( 2) 00:07:28.179 16.148 - 16.246: 99.6707% ( 1) 00:07:28.179 16.345 - 16.443: 99.6834% ( 2) 00:07:28.179 16.542 - 16.640: 99.6960% ( 2) 00:07:28.179 16.640 - 16.738: 99.7024% ( 1) 00:07:28.179 16.738 - 16.837: 99.7214% ( 3) 00:07:28.179 16.837 - 16.935: 99.7277% ( 1) 00:07:28.179 17.329 - 17.428: 99.7340% ( 1) 00:07:28.179 17.625 - 17.723: 99.7404% ( 1) 00:07:28.179 17.723 - 17.822: 99.7467% ( 1) 00:07:28.179 17.822 - 17.920: 99.7594% ( 2) 
00:07:28.179 18.018 - 18.117: 99.7657% ( 1) 00:07:28.179 18.117 - 18.215: 99.7784% ( 2) 00:07:28.179 18.215 - 18.314: 99.7847% ( 1) 00:07:28.179 18.314 - 18.412: 99.7910% ( 1) 00:07:28.179 18.412 - 18.511: 99.7974% ( 1) 00:07:28.179 18.511 - 18.609: 99.8100% ( 2) 00:07:28.179 18.609 - 18.708: 99.8290% ( 3) 00:07:28.179 18.708 - 18.806: 99.8354% ( 1) 00:07:28.179 18.806 - 18.905: 99.8480% ( 2) 00:07:28.179 18.905 - 19.003: 99.8544% ( 1) 00:07:28.179 19.102 - 19.200: 99.8607% ( 1) 00:07:28.179 19.200 - 19.298: 99.8670% ( 1) 00:07:28.179 19.594 - 19.692: 99.8734% ( 1) 00:07:28.179 19.692 - 19.791: 99.8797% ( 1) 00:07:28.179 20.283 - 20.382: 99.8924% ( 2) 00:07:28.179 20.578 - 20.677: 99.9050% ( 2) 00:07:28.179 20.775 - 20.874: 99.9113% ( 1) 00:07:28.179 21.169 - 21.268: 99.9177% ( 1) 00:07:28.179 22.055 - 22.154: 99.9240% ( 1) 00:07:28.179 22.548 - 22.646: 99.9303% ( 1) 00:07:28.179 22.646 - 22.745: 99.9367% ( 1) 00:07:28.179 24.320 - 24.418: 99.9430% ( 1) 00:07:28.179 25.600 - 25.797: 99.9493% ( 1) 00:07:28.179 27.175 - 27.372: 99.9557% ( 1) 00:07:28.179 32.689 - 32.886: 99.9620% ( 1) 00:07:28.180 33.871 - 34.068: 99.9683% ( 1) 00:07:28.180 35.249 - 35.446: 99.9747% ( 1) 00:07:28.180 42.535 - 42.732: 99.9810% ( 1) 00:07:28.180 47.458 - 47.655: 99.9873% ( 1) 00:07:28.180 198.498 - 199.286: 99.9937% ( 1) 00:07:28.180 237.883 - 239.458: 100.0000% ( 1) 00:07:28.180 00:07:28.180 ************************************ 00:07:28.180 END TEST nvme_overhead 00:07:28.180 ************************************ 00:07:28.180 00:07:28.180 real 0m1.226s 00:07:28.180 user 0m1.078s 00:07:28.180 sys 0m0.095s 00:07:28.180 22:48:07 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.180 22:48:07 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:28.180 22:48:07 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:28.180 22:48:07 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:28.180 22:48:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.180 22:48:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.180 ************************************ 00:07:28.180 START TEST nvme_arbitration 00:07:28.180 ************************************ 00:07:28.180 22:48:07 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:31.472 Initializing NVMe Controllers 00:07:31.472 Attached to 0000:00:13.0 00:07:31.472 Attached to 0000:00:10.0 00:07:31.472 Attached to 0000:00:11.0 00:07:31.472 Attached to 0000:00:12.0 00:07:31.472 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:31.472 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:31.472 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:31.472 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:31.472 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:31.472 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:31.472 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:31.472 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:31.472 Initialization complete. Launching workers. 
00:07:31.472 Starting thread on core 1 with urgent priority queue 00:07:31.472 Starting thread on core 2 with urgent priority queue 00:07:31.472 Starting thread on core 3 with urgent priority queue 00:07:31.472 Starting thread on core 0 with urgent priority queue 00:07:31.472 QEMU NVMe Ctrl (12343 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:31.472 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:31.472 QEMU NVMe Ctrl (12340 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:31.472 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:31.472 QEMU NVMe Ctrl (12341 ) core 2: 874.67 IO/s 114.33 secs/100000 ios 00:07:31.472 QEMU NVMe Ctrl (12342 ) core 3: 810.67 IO/s 123.36 secs/100000 ios 00:07:31.472 ======================================================== 00:07:31.472 00:07:31.472 ************************************ 00:07:31.472 END TEST nvme_arbitration 00:07:31.472 ************************************ 00:07:31.472 00:07:31.472 real 0m3.301s 00:07:31.472 user 0m9.276s 00:07:31.472 sys 0m0.111s 00:07:31.472 22:48:10 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.472 22:48:10 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:31.472 22:48:10 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:31.472 22:48:10 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:31.472 22:48:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.472 22:48:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.472 ************************************ 00:07:31.472 START TEST nvme_single_aen 00:07:31.472 ************************************ 00:07:31.472 22:48:10 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:31.730 Asynchronous Event Request test 00:07:31.730 Attached to 0000:00:13.0 00:07:31.730 Attached to 0000:00:10.0 00:07:31.730 Attached to 0000:00:11.0 00:07:31.731 Attached to 0000:00:12.0 00:07:31.731 Reset controller to setup AER completions for this process 00:07:31.731 Registering asynchronous event callbacks... 
00:07:31.731 Getting orig temperature thresholds of all controllers 00:07:31.731 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:31.731 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:31.731 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:31.731 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:31.731 Setting all controllers temperature threshold low to trigger AER 00:07:31.731 Waiting for all controllers temperature threshold to be set lower 00:07:31.731 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:31.731 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:31.731 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:31.731 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:31.731 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:31.731 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:31.731 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:31.731 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:31.731 Waiting for all controllers to trigger AER and reset threshold 00:07:31.731 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.731 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.731 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.731 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.731 Cleaning up... 00:07:31.731 00:07:31.731 real 0m0.223s 00:07:31.731 user 0m0.075s 00:07:31.731 sys 0m0.104s 00:07:31.731 22:48:10 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.731 22:48:10 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:31.731 ************************************ 00:07:31.731 END TEST nvme_single_aen 00:07:31.731 ************************************ 00:07:31.731 22:48:10 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:31.731 22:48:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.731 22:48:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.731 22:48:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.731 ************************************ 00:07:31.731 START TEST nvme_doorbell_aers 00:07:31.731 ************************************ 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
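The get_nvme_bdfs trace above is how the doorbell test builds its target list: gen_nvme.sh emits a JSON attach config and jq pulls each controller's PCIe address out of params.traddr; the count guard and printf that follow simply confirm and echo the four addresses. A standalone sketch of that step, reusing the repo path from the trace:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints a bdev_nvme attach config; traddr is the PCIe address of each controller
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && exit 1        # same shape as the (( 4 == 0 )) guard below
    printf '%s\n' "${bdfs[@]}"              # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0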
00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:31.731 22:48:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:31.989 [2024-12-13 22:48:10.924224] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:07:41.952 Executing: test_write_invalid_db 00:07:41.952 Waiting for AER completion... 00:07:41.952 Failure: test_write_invalid_db 00:07:41.952 00:07:41.952 Executing: test_invalid_db_write_overflow_sq 00:07:41.952 Waiting for AER completion... 00:07:41.952 Failure: test_invalid_db_write_overflow_sq 00:07:41.952 00:07:41.952 Executing: test_invalid_db_write_overflow_cq 00:07:41.952 Waiting for AER completion... 00:07:41.952 Failure: test_invalid_db_write_overflow_cq 00:07:41.952 00:07:41.952 22:48:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:41.952 22:48:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:41.952 [2024-12-13 22:48:20.993005] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:07:51.977 Executing: test_write_invalid_db 00:07:51.977 Waiting for AER completion... 00:07:51.977 Failure: test_write_invalid_db 00:07:51.977 00:07:51.977 Executing: test_invalid_db_write_overflow_sq 00:07:51.977 Waiting for AER completion... 00:07:51.977 Failure: test_invalid_db_write_overflow_sq 00:07:51.977 00:07:51.977 Executing: test_invalid_db_write_overflow_cq 00:07:51.977 Waiting for AER completion... 00:07:51.977 Failure: test_invalid_db_write_overflow_cq 00:07:51.977 00:07:51.977 22:48:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:51.977 22:48:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:51.977 [2024-12-13 22:48:31.020988] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:02.008 Executing: test_write_invalid_db 00:08:02.008 Waiting for AER completion... 00:08:02.008 Failure: test_write_invalid_db 00:08:02.008 00:08:02.008 Executing: test_invalid_db_write_overflow_sq 00:08:02.008 Waiting for AER completion... 00:08:02.008 Failure: test_invalid_db_write_overflow_sq 00:08:02.008 00:08:02.008 Executing: test_invalid_db_write_overflow_cq 00:08:02.008 Waiting for AER completion... 
00:08:02.008 Failure: test_invalid_db_write_overflow_cq 00:08:02.008 00:08:02.008 22:48:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:02.008 22:48:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:02.008 [2024-12-13 22:48:41.046724] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 Executing: test_write_invalid_db 00:08:11.974 Waiting for AER completion... 00:08:11.974 Failure: test_write_invalid_db 00:08:11.974 00:08:11.974 Executing: test_invalid_db_write_overflow_sq 00:08:11.974 Waiting for AER completion... 00:08:11.974 Failure: test_invalid_db_write_overflow_sq 00:08:11.974 00:08:11.974 Executing: test_invalid_db_write_overflow_cq 00:08:11.974 Waiting for AER completion... 00:08:11.974 Failure: test_invalid_db_write_overflow_cq 00:08:11.974 00:08:11.974 00:08:11.974 real 0m40.204s 00:08:11.974 user 0m34.223s 00:08:11.974 sys 0m5.582s 00:08:11.974 22:48:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.974 22:48:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:11.974 ************************************ 00:08:11.974 END TEST nvme_doorbell_aers 00:08:11.974 ************************************ 00:08:11.974 22:48:50 nvme -- nvme/nvme.sh@97 -- # uname 00:08:11.974 22:48:50 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:11.974 22:48:50 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:11.974 22:48:50 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:11.974 22:48:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.974 22:48:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.974 ************************************ 00:08:11.974 START TEST nvme_multi_aen 00:08:11.974 ************************************ 00:08:11.974 22:48:50 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:11.974 [2024-12-13 22:48:51.094043] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.094104] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.094119] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.096059] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.096101] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.096112] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.097671] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. 
Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.097712] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.097723] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.099162] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.099218] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 [2024-12-13 22:48:51.099229] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65001) is not found. Dropping the request. 00:08:11.974 Child process pid: 65527 00:08:12.234 [Child] Asynchronous Event Request test 00:08:12.234 [Child] Attached to 0000:00:13.0 00:08:12.234 [Child] Attached to 0000:00:10.0 00:08:12.234 [Child] Attached to 0000:00:11.0 00:08:12.234 [Child] Attached to 0000:00:12.0 00:08:12.234 [Child] Registering asynchronous event callbacks... 00:08:12.234 [Child] Getting orig temperature thresholds of all controllers 00:08:12.234 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.235 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.235 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.235 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.235 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:12.235 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.235 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.235 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.235 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.235 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.235 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.235 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.235 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.235 [Child] Cleaning up... 00:08:12.235 Asynchronous Event Request test 00:08:12.235 Attached to 0000:00:13.0 00:08:12.235 Attached to 0000:00:10.0 00:08:12.235 Attached to 0000:00:11.0 00:08:12.235 Attached to 0000:00:12.0 00:08:12.235 Reset controller to setup AER completions for this process 00:08:12.235 Registering asynchronous event callbacks... 
00:08:12.235 Getting orig temperature thresholds of all controllers 00:08:12.235 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.235 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.235 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.235 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:12.235 Setting all controllers temperature threshold low to trigger AER 00:08:12.235 Waiting for all controllers temperature threshold to be set lower 00:08:12.235 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.235 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:12.235 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.235 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:12.235 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.235 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:12.235 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:12.235 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:12.235 Waiting for all controllers to trigger AER and reset threshold 00:08:12.235 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.235 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.235 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.235 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.235 Cleaning up... 00:08:12.235 00:08:12.235 real 0m0.433s 00:08:12.235 user 0m0.150s 00:08:12.235 sys 0m0.192s 00:08:12.235 22:48:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.235 22:48:51 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:12.235 ************************************ 00:08:12.235 END TEST nvme_multi_aen 00:08:12.235 ************************************ 00:08:12.493 22:48:51 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:12.493 22:48:51 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:12.493 22:48:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.493 22:48:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.493 ************************************ 00:08:12.493 START TEST nvme_startup 00:08:12.493 ************************************ 00:08:12.493 22:48:51 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:12.493 Initializing NVMe Controllers 00:08:12.493 Attached to 0000:00:13.0 00:08:12.493 Attached to 0000:00:10.0 00:08:12.493 Attached to 0000:00:11.0 00:08:12.493 Attached to 0000:00:12.0 00:08:12.493 Initialization complete. 00:08:12.493 Time used:139272.172 (us). 
00:08:12.493 00:08:12.493 real 0m0.208s 00:08:12.493 user 0m0.069s 00:08:12.493 sys 0m0.097s 00:08:12.493 22:48:51 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.493 22:48:51 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:12.493 ************************************ 00:08:12.493 END TEST nvme_startup 00:08:12.493 ************************************ 00:08:12.751 22:48:51 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:12.751 22:48:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.751 22:48:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.751 22:48:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.751 ************************************ 00:08:12.751 START TEST nvme_multi_secondary 00:08:12.751 ************************************ 00:08:12.751 22:48:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:12.751 22:48:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65577 00:08:12.751 22:48:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:12.751 22:48:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65578 00:08:12.751 22:48:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:12.751 22:48:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:16.031 Initializing NVMe Controllers 00:08:16.031 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:16.031 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:16.031 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:16.031 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:16.031 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:16.031 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:16.031 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:16.031 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:16.031 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:16.031 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:16.031 Initialization complete. Launching workers. 
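The nvme.sh line numbers in the trace above (51 through 55) show how nvme_multi_secondary stages this round: the 5-second run on core mask 0x1 is launched first and, running longest, serves as the primary process; the 3-second run on 0x2 is backgrounded as a secondary; and the 3-second run on 0x4 is a second secondary executed in the foreground. The wait 65577 / wait 65578 lines further down reap the two background runs. Reconstructed as a sketch (the & placement and pid captures are inferred from the pid0/pid1 assignments in the trace):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary instance, shared shm id 0
    pid0=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, attaches to the same controllers
    pid1=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary, run in the foreground
    wait "$pid0"
    wait "$pid1"

Because all three runs pass -i 0 they share one shared-memory instance, which is exactly the primary/secondary multi-process path this test exercises.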
00:08:16.031 ======================================================== 00:08:16.031 Latency(us) 00:08:16.031 Device Information : IOPS MiB/s Average min max 00:08:16.031 PCIE (0000:00:13.0) NSID 1 from core 1: 7680.97 30.00 2082.66 738.56 6641.90 00:08:16.031 PCIE (0000:00:10.0) NSID 1 from core 1: 7680.97 30.00 2081.85 709.93 6892.77 00:08:16.031 PCIE (0000:00:11.0) NSID 1 from core 1: 7680.97 30.00 2082.91 725.96 6674.36 00:08:16.031 PCIE (0000:00:12.0) NSID 1 from core 1: 7680.97 30.00 2082.95 720.39 6531.00 00:08:16.031 PCIE (0000:00:12.0) NSID 2 from core 1: 7680.97 30.00 2082.98 718.85 6228.91 00:08:16.031 PCIE (0000:00:12.0) NSID 3 from core 1: 7680.97 30.00 2083.02 732.22 6556.89 00:08:16.031 ======================================================== 00:08:16.031 Total : 46085.84 180.02 2082.73 709.93 6892.77 00:08:16.031 00:08:16.031 Initializing NVMe Controllers 00:08:16.031 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:16.031 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:16.031 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:16.031 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:16.031 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:16.031 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:16.031 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:16.031 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:16.031 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:16.031 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:16.031 Initialization complete. Launching workers. 00:08:16.031 ======================================================== 00:08:16.031 Latency(us) 00:08:16.031 Device Information : IOPS MiB/s Average min max 00:08:16.031 PCIE (0000:00:13.0) NSID 1 from core 2: 3137.74 12.26 5095.42 981.46 12999.26 00:08:16.031 PCIE (0000:00:10.0) NSID 1 from core 2: 3137.74 12.26 5091.26 1294.30 17139.15 00:08:16.031 PCIE (0000:00:11.0) NSID 1 from core 2: 3137.74 12.26 5091.59 1186.24 17394.38 00:08:16.031 PCIE (0000:00:12.0) NSID 1 from core 2: 3137.74 12.26 5091.98 1214.28 12486.50 00:08:16.031 PCIE (0000:00:12.0) NSID 2 from core 2: 3137.74 12.26 5091.53 984.16 12565.42 00:08:16.031 PCIE (0000:00:12.0) NSID 3 from core 2: 3137.74 12.26 5091.94 801.20 13032.37 00:08:16.031 ======================================================== 00:08:16.031 Total : 18826.46 73.54 5092.29 801.20 17394.38 00:08:16.031 00:08:16.031 22:48:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65577 00:08:17.929 Initializing NVMe Controllers 00:08:17.929 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:17.929 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:17.929 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:17.929 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:17.929 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:17.929 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:17.929 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:17.929 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:17.929 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:17.929 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:17.929 Initialization complete. Launching workers. 
00:08:17.929 ======================================================== 00:08:17.929 Latency(us) 00:08:17.929 Device Information : IOPS MiB/s Average min max 00:08:17.929 PCIE (0000:00:13.0) NSID 1 from core 0: 10179.11 39.76 1571.48 745.39 8366.01 00:08:17.929 PCIE (0000:00:10.0) NSID 1 from core 0: 10179.11 39.76 1570.61 722.28 7911.48 00:08:17.929 PCIE (0000:00:11.0) NSID 1 from core 0: 10179.11 39.76 1571.42 738.50 7661.36 00:08:17.929 PCIE (0000:00:12.0) NSID 1 from core 0: 10179.11 39.76 1571.38 658.49 8679.85 00:08:17.929 PCIE (0000:00:12.0) NSID 2 from core 0: 10179.11 39.76 1571.36 640.76 9581.27 00:08:17.929 PCIE (0000:00:12.0) NSID 3 from core 0: 10179.11 39.76 1571.33 617.90 9016.40 00:08:17.929 ======================================================== 00:08:17.929 Total : 61074.64 238.57 1571.27 617.90 9581.27 00:08:17.929 00:08:17.929 22:48:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65578 00:08:17.929 22:48:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65653 00:08:17.930 22:48:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:17.930 22:48:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65654 00:08:17.930 22:48:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:17.930 22:48:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:21.224 Initializing NVMe Controllers 00:08:21.224 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.224 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.224 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.224 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.224 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:21.224 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:21.224 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:21.224 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:21.224 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:21.224 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:21.224 Initialization complete. Launching workers. 
00:08:21.224 ======================================================== 00:08:21.224 Latency(us) 00:08:21.224 Device Information : IOPS MiB/s Average min max 00:08:21.224 PCIE (0000:00:13.0) NSID 1 from core 1: 4970.56 19.42 3218.56 716.24 13197.09 00:08:21.224 PCIE (0000:00:10.0) NSID 1 from core 1: 4970.56 19.42 3217.96 697.93 13256.59 00:08:21.224 PCIE (0000:00:11.0) NSID 1 from core 1: 4970.56 19.42 3219.08 717.38 13680.25 00:08:21.224 PCIE (0000:00:12.0) NSID 1 from core 1: 4970.56 19.42 3220.00 708.65 12135.41 00:08:21.224 PCIE (0000:00:12.0) NSID 2 from core 1: 4970.56 19.42 3221.70 702.47 12526.15 00:08:21.224 PCIE (0000:00:12.0) NSID 3 from core 1: 4970.56 19.42 3221.70 715.49 12590.04 00:08:21.224 ======================================================== 00:08:21.224 Total : 29823.37 116.50 3219.83 697.93 13680.25 00:08:21.224 00:08:21.224 Initializing NVMe Controllers 00:08:21.224 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.224 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.224 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.224 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.224 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:21.224 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:21.224 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:21.224 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:21.224 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:21.224 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:21.224 Initialization complete. Launching workers. 00:08:21.224 ======================================================== 00:08:21.224 Latency(us) 00:08:21.224 Device Information : IOPS MiB/s Average min max 00:08:21.224 PCIE (0000:00:13.0) NSID 1 from core 0: 3634.29 14.20 4401.91 770.38 11663.68 00:08:21.224 PCIE (0000:00:10.0) NSID 1 from core 0: 3634.29 14.20 4401.32 757.95 11851.26 00:08:21.224 PCIE (0000:00:11.0) NSID 1 from core 0: 3634.29 14.20 4402.35 767.99 12020.40 00:08:21.224 PCIE (0000:00:12.0) NSID 1 from core 0: 3634.29 14.20 4402.19 786.79 12119.92 00:08:21.224 PCIE (0000:00:12.0) NSID 2 from core 0: 3634.29 14.20 4402.05 788.70 11425.33 00:08:21.224 PCIE (0000:00:12.0) NSID 3 from core 0: 3634.29 14.20 4401.95 780.73 13415.33 00:08:21.224 ======================================================== 00:08:21.224 Total : 21805.75 85.18 4401.96 757.95 13415.33 00:08:21.224 00:08:23.771 Initializing NVMe Controllers 00:08:23.771 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.771 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.771 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:23.771 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:23.771 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:23.771 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:23.771 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:23.771 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:23.771 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:23.771 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:23.771 Initialization complete. Launching workers. 
00:08:23.771 ======================================================== 00:08:23.771 Latency(us) 00:08:23.771 Device Information : IOPS MiB/s Average min max 00:08:23.771 PCIE (0000:00:13.0) NSID 1 from core 2: 1998.16 7.81 8006.91 1126.59 27970.85 00:08:23.771 PCIE (0000:00:10.0) NSID 1 from core 2: 1998.16 7.81 8005.07 1098.30 26443.97 00:08:23.771 PCIE (0000:00:11.0) NSID 1 from core 2: 1998.16 7.81 8007.14 1106.24 29344.28 00:08:23.771 PCIE (0000:00:12.0) NSID 1 from core 2: 1998.16 7.81 8006.20 957.39 33613.69 00:08:23.771 PCIE (0000:00:12.0) NSID 2 from core 2: 1998.16 7.81 8006.86 1111.40 28535.60 00:08:23.771 PCIE (0000:00:12.0) NSID 3 from core 2: 1998.16 7.81 8006.70 1151.65 34385.55 00:08:23.771 ======================================================== 00:08:23.771 Total : 11988.95 46.83 8006.48 957.39 34385.55 00:08:23.771 00:08:23.771 22:49:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65653 00:08:23.771 22:49:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65654 00:08:23.771 00:08:23.771 real 0m10.763s 00:08:23.771 user 0m18.326s 00:08:23.771 sys 0m0.794s 00:08:23.771 ************************************ 00:08:23.771 END TEST nvme_multi_secondary 00:08:23.771 22:49:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.771 22:49:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:23.771 ************************************ 00:08:23.771 22:49:02 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:23.771 22:49:02 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:23.771 22:49:02 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64598 ]] 00:08:23.771 22:49:02 nvme -- common/autotest_common.sh@1094 -- # kill 64598 00:08:23.771 22:49:02 nvme -- common/autotest_common.sh@1095 -- # wait 64598 00:08:23.771 [2024-12-13 22:49:02.452034] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.452103] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.452133] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.452151] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.454663] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.454699] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.454710] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.454721] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.456572] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 
00:08:23.771 [2024-12-13 22:49:02.456607] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.456617] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.456627] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.458444] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.458481] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.458491] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.458502] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65526) is not found. Dropping the request. 00:08:23.771 [2024-12-13 22:49:02.589233] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:23.771 22:49:02 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:23.771 22:49:02 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:23.771 22:49:02 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:23.771 22:49:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.771 22:49:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.771 22:49:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.771 ************************************ 00:08:23.771 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:23.771 ************************************ 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:23.771 * Looking for test storage... 
00:08:23.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:23.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.771 --rc genhtml_branch_coverage=1 00:08:23.771 --rc genhtml_function_coverage=1 00:08:23.771 --rc genhtml_legend=1 00:08:23.771 --rc geninfo_all_blocks=1 00:08:23.771 --rc geninfo_unexecuted_blocks=1 00:08:23.771 00:08:23.771 ' 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:23.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.771 --rc genhtml_branch_coverage=1 00:08:23.771 --rc genhtml_function_coverage=1 00:08:23.771 --rc genhtml_legend=1 00:08:23.771 --rc geninfo_all_blocks=1 00:08:23.771 --rc geninfo_unexecuted_blocks=1 00:08:23.771 00:08:23.771 ' 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:23.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.771 --rc genhtml_branch_coverage=1 00:08:23.771 --rc genhtml_function_coverage=1 00:08:23.771 --rc genhtml_legend=1 00:08:23.771 --rc geninfo_all_blocks=1 00:08:23.771 --rc geninfo_unexecuted_blocks=1 00:08:23.771 00:08:23.771 ' 00:08:23.771 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:23.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.772 --rc genhtml_branch_coverage=1 00:08:23.772 --rc genhtml_function_coverage=1 00:08:23.772 --rc genhtml_legend=1 00:08:23.772 --rc geninfo_all_blocks=1 00:08:23.772 --rc geninfo_unexecuted_blocks=1 00:08:23.772 00:08:23.772 ' 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:23.772 
22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65811 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65811 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65811 ']' 00:08:23.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
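Per the trace above, the reset test first starts a bare SPDK target (spdk_tgt -m 0xF) and waitforlisten blocks until the RPC socket answers before any bdev_nvme RPCs are issued. A minimal sketch of that startup step, using the binary path and socket from the trace (the polling loop itself is illustrative, not a copy of waitforlisten):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt" -m 0xF &
    spdk_target_pid=$!
    # poll the default RPC socket until the target is ready to serve RPCs
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done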
00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.772 22:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:23.772 [2024-12-13 22:49:02.907825] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:23.772 [2024-12-13 22:49:02.907961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65811 ] 00:08:24.033 [2024-12-13 22:49:03.083392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.293 [2024-12-13 22:49:03.206320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.293 [2024-12-13 22:49:03.206525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.293 [2024-12-13 22:49:03.206889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.293 [2024-12-13 22:49:03.206908] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:24.865 nvme0n1 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_fRmF4.txt 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:24.865 true 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734130143 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65834 00:08:24.865 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:24.866 22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:24.866 
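The RPC sequence traced above sets up the stuck-admin-command scenario: the controller at 0000:00:10.0 is attached as nvme0, an error injection is armed for a single admin command with opcode 10 (0x0a, Get Features) that holds the command for up to 15000000 us and then completes it with sct 0 / sc 1, and bdev_nvme_send_cmd fires a Get Features (Number of Queues) admin command in the background so it gets caught by that injection. Two seconds later (below) bdev_nvme_reset_controller has to force the stuck command to complete, and the test then decodes the returned status and checks that the exchange stayed within test_timeout=5 seconds. Condensed into a sketch (rpc_cmd in the trace is a thin wrapper around rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Get Features (Number of Queues) admin command; it hangs on the injection above
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    get_feat_pid=$!
    sleep 2
    "$rpc" bdev_nvme_reset_controller nvme0    # must complete the stuck command promptly
    wait "$get_feat_pid"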
22:49:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:27.409 22:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:27.409 22:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.409 22:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:27.409 [2024-12-13 22:49:05.969515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:27.409 [2024-12-13 22:49:05.969891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:27.409 [2024-12-13 22:49:05.969926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:27.409 [2024-12-13 22:49:05.969943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:27.409 [2024-12-13 22:49:05.973688] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:27.409 22:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.409 22:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65834 00:08:27.409 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65834 00:08:27.409 22:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65834 00:08:27.409 22:49:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_fRmF4.txt 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_fRmF4.txt 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65811 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65811 ']' 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65811 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65811 00:08:27.409 killing process with pid 65811 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65811' 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65811 00:08:27.409 22:49:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65811 00:08:28.348 22:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:28.348 22:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:28.348 ************************************ 00:08:28.348 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:28.348 ************************************ 00:08:28.348 00:08:28.348 real 0m4.737s 00:08:28.348 user 
0m16.617s 00:08:28.348 sys 0m0.594s 00:08:28.348 22:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.348 22:49:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:28.348 22:49:07 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:28.348 22:49:07 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:28.348 22:49:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.348 22:49:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.348 22:49:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.348 ************************************ 00:08:28.348 START TEST nvme_fio 00:08:28.348 ************************************ 00:08:28.348 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:28.348 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:28.348 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:28.348 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:28.348 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:28.348 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:28.348 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:28.348 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:28.348 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:28.348 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:28.348 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:28.348 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:28.348 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:28.348 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:28.348 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:28.348 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:28.609 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:28.609 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:28.870 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:28.870 22:49:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1344 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:28.870 22:49:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:29.130 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:29.130 fio-3.35 00:08:29.130 Starting 1 thread 00:08:34.423 00:08:34.423 test: (groupid=0, jobs=1): err= 0: pid=65975: Fri Dec 13 22:49:12 2024 00:08:34.423 read: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(141MiB/2001msec) 00:08:34.423 slat (usec): min=4, max=216, avg= 5.91, stdev= 3.29 00:08:34.423 clat (usec): min=480, max=9616, avg=3525.09, stdev=1245.56 00:08:34.423 lat (usec): min=485, max=9621, avg=3531.00, stdev=1246.93 00:08:34.423 clat percentiles (usec): 00:08:34.423 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2638], 00:08:34.423 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3261], 00:08:34.423 | 70.00th=[ 3654], 80.00th=[ 4293], 90.00th=[ 5407], 95.00th=[ 6325], 00:08:34.423 | 99.00th=[ 7635], 99.50th=[ 7963], 99.90th=[ 8848], 99.95th=[ 9241], 00:08:34.423 | 99.99th=[ 9503] 00:08:34.423 bw ( KiB/s): min=70304, max=73936, per=99.90%, avg=71845.33, stdev=1877.28, samples=3 00:08:34.423 iops : min=17576, max=18484, avg=17961.33, stdev=469.32, samples=3 00:08:34.423 write: IOPS=18.0k, BW=70.3MiB/s (73.7MB/s)(141MiB/2001msec); 0 zone resets 00:08:34.423 slat (nsec): min=4297, max=76295, avg=6048.47, stdev=3032.90 00:08:34.423 clat (usec): min=489, max=9673, avg=3563.63, stdev=1259.30 00:08:34.423 lat (usec): min=494, max=9688, avg=3569.68, stdev=1260.61 00:08:34.423 clat percentiles (usec): 00:08:34.423 | 1.00th=[ 2180], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:08:34.423 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 3064], 60.00th=[ 3294], 00:08:34.423 | 70.00th=[ 3687], 80.00th=[ 4359], 90.00th=[ 5538], 95.00th=[ 6325], 00:08:34.423 | 99.00th=[ 7701], 99.50th=[ 7963], 99.90th=[ 8717], 99.95th=[ 8979], 00:08:34.423 | 99.99th=[ 9372] 00:08:34.423 bw ( KiB/s): min=70280, max=74344, per=99.74%, avg=71776.00, stdev=2234.03, samples=3 00:08:34.423 iops : min=17570, max=18586, avg=17944.00, stdev=558.51, samples=3 00:08:34.423 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:34.423 lat (msec) : 2=0.35%, 4=75.38%, 10=24.27% 00:08:34.423 cpu : usr=98.50%, sys=0.45%, ctx=7, majf=0, minf=607 00:08:34.423 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:34.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.423 issued rwts: total=35977,36001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.423 00:08:34.423 Run status group 0 (all jobs): 00:08:34.423 READ: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=141MiB (147MB), run=2001-2001msec 00:08:34.423 WRITE: bw=70.3MiB/s (73.7MB/s), 70.3MiB/s-70.3MiB/s (73.7MB/s-73.7MB/s), io=141MiB (147MB), run=2001-2001msec 00:08:34.423 ----------------------------------------------------- 00:08:34.423 Suppressions used: 00:08:34.423 count bytes template 00:08:34.423 1 32 /usr/src/fio/parse.c 00:08:34.423 1 8 libtcmalloc_minimal.so 00:08:34.423 ----------------------------------------------------- 00:08:34.423 00:08:34.423 22:49:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:34.423 22:49:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:34.423 22:49:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:34.423 22:49:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:34.423 22:49:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:34.423 22:49:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:34.423 22:49:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:34.423 22:49:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:34.423 22:49:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:34.685 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:34.685 fio-3.35 00:08:34.685 Starting 1 thread 00:08:39.974 00:08:39.974 test: (groupid=0, jobs=1): err= 0: pid=66040: Fri Dec 13 22:49:18 2024 00:08:39.974 read: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(131MiB/2001msec) 00:08:39.974 slat (nsec): min=4199, max=98043, avg=6163.54, stdev=3418.86 00:08:39.974 clat (usec): min=651, max=14247, avg=3770.79, stdev=1407.63 00:08:39.974 lat (usec): min=656, max=14345, avg=3776.96, stdev=1409.24 00:08:39.974 clat percentiles (usec): 00:08:39.974 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:08:39.974 | 30.00th=[ 2900], 40.00th=[ 3097], 50.00th=[ 3326], 60.00th=[ 3556], 00:08:39.974 | 70.00th=[ 3982], 80.00th=[ 4817], 90.00th=[ 5997], 95.00th=[ 6718], 00:08:39.974 | 99.00th=[ 8094], 99.50th=[ 8717], 99.90th=[10290], 99.95th=[11863], 00:08:39.974 | 99.99th=[14091] 00:08:39.974 bw ( KiB/s): min=58992, max=70024, per=93.65%, avg=63016.00, stdev=6091.34, samples=3 00:08:39.974 iops : min=14748, max=17506, avg=15754.00, stdev=1522.84, samples=3 00:08:39.974 write: IOPS=16.8k, BW=65.8MiB/s (69.0MB/s)(132MiB/2001msec); 0 zone resets 00:08:39.974 slat (nsec): min=4282, max=89620, avg=6480.93, stdev=3481.48 00:08:39.974 clat (usec): min=268, max=14162, avg=3800.81, stdev=1426.12 00:08:39.974 lat (usec): min=273, max=14182, avg=3807.29, stdev=1427.78 00:08:39.974 clat percentiles (usec): 00:08:39.974 | 1.00th=[ 2180], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:08:39.974 | 30.00th=[ 2900], 40.00th=[ 3130], 50.00th=[ 3326], 60.00th=[ 3589], 00:08:39.974 | 70.00th=[ 4015], 80.00th=[ 4883], 90.00th=[ 6063], 95.00th=[ 6783], 00:08:39.974 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[10552], 99.95th=[11863], 00:08:39.974 | 99.99th=[14091] 00:08:39.974 bw ( KiB/s): min=58096, max=69464, per=92.99%, avg=62669.33, stdev=6000.72, samples=3 00:08:39.974 iops : min=14524, max=17366, avg=15667.33, stdev=1500.18, samples=3 00:08:39.974 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:08:39.974 lat (msec) : 2=0.29%, 4=69.92%, 10=29.63%, 20=0.12% 00:08:39.974 cpu : usr=98.65%, sys=0.10%, ctx=4, majf=0, minf=608 00:08:39.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:39.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.974 issued rwts: total=33660,33713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:39.974 00:08:39.974 Run status group 0 (all jobs): 00:08:39.974 READ: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=131MiB (138MB), run=2001-2001msec 00:08:39.974 WRITE: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=132MiB (138MB), run=2001-2001msec 00:08:39.974 ----------------------------------------------------- 00:08:39.974 Suppressions used: 00:08:39.974 count bytes template 00:08:39.974 1 32 /usr/src/fio/parse.c 00:08:39.974 1 8 libtcmalloc_minimal.so 00:08:39.974 ----------------------------------------------------- 00:08:39.974 00:08:39.974 22:49:18 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:08:39.974 22:49:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:39.974 22:49:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:39.974 22:49:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:39.974 22:49:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:39.974 22:49:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:40.236 22:49:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:40.236 22:49:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:40.236 22:49:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:40.498 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:40.498 fio-3.35 00:08:40.498 Starting 1 thread 00:08:47.088 00:08:47.088 test: (groupid=0, jobs=1): err= 0: pid=66091: Fri Dec 13 22:49:25 2024 00:08:47.088 read: IOPS=18.9k, BW=73.9MiB/s (77.5MB/s)(148MiB/2001msec) 00:08:47.088 slat (nsec): min=3356, max=71914, avg=5538.78, stdev=3051.64 00:08:47.088 clat (usec): min=294, max=10310, avg=3373.65, stdev=1177.19 00:08:47.088 lat (usec): min=312, max=10342, avg=3379.19, stdev=1178.54 00:08:47.088 clat percentiles (usec): 00:08:47.088 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:08:47.088 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 
2900], 60.00th=[ 3097], 00:08:47.088 | 70.00th=[ 3458], 80.00th=[ 4178], 90.00th=[ 5145], 95.00th=[ 5997], 00:08:47.088 | 99.00th=[ 7242], 99.50th=[ 7767], 99.90th=[ 8586], 99.95th=[ 9110], 00:08:47.088 | 99.99th=[10159] 00:08:47.088 bw ( KiB/s): min=72624, max=77548, per=99.03%, avg=74916.00, stdev=2479.55, samples=3 00:08:47.088 iops : min=18156, max=19387, avg=18729.00, stdev=619.89, samples=3 00:08:47.088 write: IOPS=18.9k, BW=73.9MiB/s (77.5MB/s)(148MiB/2001msec); 0 zone resets 00:08:47.088 slat (nsec): min=3539, max=80185, avg=5771.18, stdev=2897.86 00:08:47.088 clat (usec): min=811, max=10223, avg=3369.20, stdev=1159.99 00:08:47.088 lat (usec): min=817, max=10238, avg=3374.97, stdev=1161.31 00:08:47.088 clat percentiles (usec): 00:08:47.088 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:08:47.088 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 3097], 00:08:47.088 | 70.00th=[ 3458], 80.00th=[ 4146], 90.00th=[ 5080], 95.00th=[ 5997], 00:08:47.088 | 99.00th=[ 7177], 99.50th=[ 7635], 99.90th=[ 8586], 99.95th=[ 8979], 00:08:47.088 | 99.99th=[10159] 00:08:47.088 bw ( KiB/s): min=72776, max=77461, per=99.05%, avg=74951.00, stdev=2360.40, samples=3 00:08:47.088 iops : min=18194, max=19365, avg=18737.67, stdev=589.97, samples=3 00:08:47.088 lat (usec) : 500=0.01%, 1000=0.01% 00:08:47.088 lat (msec) : 2=0.48%, 4=77.52%, 10=21.97%, 20=0.02% 00:08:47.088 cpu : usr=98.90%, sys=0.10%, ctx=23, majf=0, minf=607 00:08:47.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:47.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:47.088 issued rwts: total=37842,37855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:47.088 00:08:47.088 Run status group 0 (all jobs): 00:08:47.088 READ: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=148MiB (155MB), run=2001-2001msec 00:08:47.088 WRITE: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=148MiB (155MB), run=2001-2001msec 00:08:47.088 ----------------------------------------------------- 00:08:47.088 Suppressions used: 00:08:47.088 count bytes template 00:08:47.088 1 32 /usr/src/fio/parse.c 00:08:47.088 1 8 libtcmalloc_minimal.so 00:08:47.088 ----------------------------------------------------- 00:08:47.088 00:08:47.088 22:49:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:47.088 22:49:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:47.088 22:49:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:47.088 22:49:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:47.088 22:49:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:47.088 22:49:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:47.088 22:49:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:47.088 22:49:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:47.088 22:49:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:47.350 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:47.350 fio-3.35 00:08:47.350 Starting 1 thread 00:08:57.347 00:08:57.347 test: (groupid=0, jobs=1): err= 0: pid=66153: Fri Dec 13 22:49:34 2024 00:08:57.347 read: IOPS=19.4k, BW=75.7MiB/s (79.4MB/s)(152MiB/2001msec) 00:08:57.347 slat (nsec): min=3405, max=56646, avg=5279.87, stdev=2348.09 00:08:57.347 clat (usec): min=609, max=11485, avg=3174.29, stdev=964.72 00:08:57.347 lat (usec): min=616, max=11510, avg=3179.57, stdev=965.62 00:08:57.347 clat percentiles (usec): 00:08:57.347 | 1.00th=[ 1418], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:08:57.347 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2999], 00:08:57.347 | 70.00th=[ 3228], 80.00th=[ 3720], 90.00th=[ 4555], 95.00th=[ 5211], 00:08:57.347 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 9503], 00:08:57.347 | 99.99th=[10945] 00:08:57.347 bw ( KiB/s): min=73648, max=84512, per=100.00%, avg=78016.00, stdev=5736.11, samples=3 00:08:57.347 iops : min=18412, max=21128, avg=19504.00, stdev=1434.03, samples=3 00:08:57.347 write: IOPS=19.3k, BW=75.6MiB/s (79.3MB/s)(151MiB/2001msec); 0 zone resets 00:08:57.347 slat (nsec): min=3531, max=61453, avg=5432.35, stdev=2340.14 00:08:57.347 clat (usec): min=567, max=28983, avg=3408.52, stdev=2063.35 00:08:57.347 lat (usec): min=575, max=28988, avg=3413.95, stdev=2063.73 00:08:57.347 clat percentiles (usec): 00:08:57.347 | 1.00th=[ 1631], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2606], 00:08:57.347 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 3032], 00:08:57.347 | 70.00th=[ 3261], 80.00th=[ 3818], 90.00th=[ 4686], 95.00th=[ 5538], 
00:08:57.347 | 99.00th=[13042], 99.50th=[20317], 99.90th=[26346], 99.95th=[27132], 00:08:57.347 | 99.99th=[28443] 00:08:57.347 bw ( KiB/s): min=73632, max=84672, per=100.00%, avg=78178.67, stdev=5771.70, samples=3 00:08:57.347 iops : min=18408, max=21168, avg=19544.67, stdev=1442.93, samples=3 00:08:57.347 lat (usec) : 750=0.02%, 1000=0.09% 00:08:57.347 lat (msec) : 2=2.01%, 4=81.15%, 10=16.07%, 20=0.39%, 50=0.26% 00:08:57.347 cpu : usr=99.05%, sys=0.15%, ctx=3, majf=0, minf=605 00:08:57.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:57.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.347 issued rwts: total=38797,38721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.347 00:08:57.347 Run status group 0 (all jobs): 00:08:57.347 READ: bw=75.7MiB/s (79.4MB/s), 75.7MiB/s-75.7MiB/s (79.4MB/s-79.4MB/s), io=152MiB (159MB), run=2001-2001msec 00:08:57.347 WRITE: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=151MiB (159MB), run=2001-2001msec 00:08:57.347 ----------------------------------------------------- 00:08:57.347 Suppressions used: 00:08:57.347 count bytes template 00:08:57.347 1 32 /usr/src/fio/parse.c 00:08:57.347 1 8 libtcmalloc_minimal.so 00:08:57.347 ----------------------------------------------------- 00:08:57.347 00:08:57.347 ************************************ 00:08:57.347 END TEST nvme_fio 00:08:57.347 ************************************ 00:08:57.347 22:49:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:57.347 22:49:35 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:57.347 00:08:57.347 real 0m27.721s 00:08:57.347 user 0m16.130s 00:08:57.347 sys 0m20.762s 00:08:57.347 22:49:35 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.347 22:49:35 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:57.347 ************************************ 00:08:57.347 END TEST nvme 00:08:57.347 ************************************ 00:08:57.347 00:08:57.347 real 1m38.010s 00:08:57.347 user 3m38.522s 00:08:57.347 sys 0m31.663s 00:08:57.347 22:49:35 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.347 22:49:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.347 22:49:35 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:57.347 22:49:35 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:57.347 22:49:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.348 22:49:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.348 22:49:35 -- common/autotest_common.sh@10 -- # set +x 00:08:57.348 ************************************ 00:08:57.348 START TEST nvme_scc 00:08:57.348 ************************************ 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:57.348 * Looking for test storage... 
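The four fio passes above (one per PCIe controller at 0000:00:10.0, 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0) repeat the same preload pattern that the autotest_common.sh xtrace shows: find the ASan runtime the SPDK fio plugin was linked against with ldd, then run fio with that runtime and the plugin both preloaded. A minimal sketch of that pattern, assuming the paths seen in the trace; the helper name fio_nvme_sketch and its exact structure are illustrative, not the test code itself:

fio_nvme_sketch() {
    local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    local fio_bin=/usr/src/fio/fio
    local asan_lib
    # The plugin is built with ASan in this job, so fio has to load the ASan
    # runtime before the plugin; ldd tells us which runtime that is.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the sanitizer runtime (if found) followed by the SPDK ioengine.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" "$fio_bin" "$@"
}

# In the runs above the test settled on --bs=4096 after the spdk_nvme_identify
# checks, then invoked the plugin roughly like this (illustrative call):
# fio_nvme_sketch example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096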
00:08:57.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:57.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.348 --rc genhtml_branch_coverage=1 00:08:57.348 --rc genhtml_function_coverage=1 00:08:57.348 --rc genhtml_legend=1 00:08:57.348 --rc geninfo_all_blocks=1 00:08:57.348 --rc geninfo_unexecuted_blocks=1 00:08:57.348 00:08:57.348 ' 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:57.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.348 --rc genhtml_branch_coverage=1 00:08:57.348 --rc genhtml_function_coverage=1 00:08:57.348 --rc genhtml_legend=1 00:08:57.348 --rc geninfo_all_blocks=1 00:08:57.348 --rc geninfo_unexecuted_blocks=1 00:08:57.348 00:08:57.348 ' 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:57.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.348 --rc genhtml_branch_coverage=1 00:08:57.348 --rc genhtml_function_coverage=1 00:08:57.348 --rc genhtml_legend=1 00:08:57.348 --rc geninfo_all_blocks=1 00:08:57.348 --rc geninfo_unexecuted_blocks=1 00:08:57.348 00:08:57.348 ' 00:08:57.348 22:49:35 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:57.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.348 --rc genhtml_branch_coverage=1 00:08:57.348 --rc genhtml_function_coverage=1 00:08:57.348 --rc genhtml_legend=1 00:08:57.348 --rc geninfo_all_blocks=1 00:08:57.348 --rc geninfo_unexecuted_blocks=1 00:08:57.348 00:08:57.348 ' 00:08:57.348 22:49:35 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.348 22:49:35 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.348 22:49:35 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.348 22:49:35 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.348 22:49:35 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.348 22:49:35 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:57.348 22:49:35 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
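The lcov gate traced a few entries back ("lt 1.15 2" going through scripts/common.sh cmp_versions) splits both version strings on '.' and '-' and compares them numerically field by field. A simplified sketch of that comparison, assuming purely decimal fields; this is not the exact SPDK helper:

version_lt() {
    # Split both versions on '.' and '-' and compare field by field; a missing
    # field counts as 0. Returns success (0) when $1 is older than $2.
    local IFS=.-
    local -a a b
    local i
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
# version_lt 1.15 2 && echo "lcov predates 2, use the branch/function coverage flags"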
00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:57.348 22:49:35 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:57.348 22:49:35 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.348 22:49:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:57.348 22:49:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:57.348 22:49:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:57.348 22:49:35 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:57.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:57.348 Waiting for block devices as requested 00:08:57.348 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.348 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.348 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.348 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.674 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:02.674 22:49:41 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:02.674 22:49:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:02.674 22:49:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:02.674 22:49:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:02.674 22:49:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:02.674 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.675 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:02.676 22:49:41 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:02.676 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:02.677 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # 
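The xtrace above shows the shape of the nvme_get helper used here: it pipes nvme id-ctrl (or id-ns) output through a while IFS=: read -r reg val loop and eval's each non-empty pair into a bash associative array such as nvme0[...] or ng0n1[...]. Below is a minimal standalone sketch of that parsing pattern; it assumes nvme-cli is in PATH and a controller at /dev/nvme0, and the function name nvme_id_to_array is illustrative rather than the SPDK helper itself.

#!/usr/bin/env bash
# Sketch of the reg/val parsing pattern visible in the trace (illustrative only).
declare -A ctrl_info

nvme_id_to_array() {
    local dev=$1 reg val
    # nvme-cli prints "field : value" lines; splitting on the first ':' only keeps
    # multi-colon values (e.g. power state descriptors) intact in val.
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue      # skip headers and blank lines
        reg=${reg//[[:space:]]/}                  # normalise the key, e.g. "sqes"
        val="${val#"${val%%[![:space:]]*}"}"      # trim leading whitespace from the value
        ctrl_info[$reg]=$val
    done < <(nvme id-ctrl "$dev" 2>/dev/null)
}

nvme_id_to_array /dev/nvme0
echo "sqes=${ctrl_info[sqes]:-?} cqes=${ctrl_info[cqes]:-?} subnqn=${ctrl_info[subnqn]:-?}"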
read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:02.677 
22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:02.677 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:02.678 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.678 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:02.679 22:49:41 nvme_scc 
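Read together, the ng0n1 fields captured above pin down the namespace geometry: flbas 0x4 selects LBA format 4, whose descriptor reads ms:0 lbads:12 rp:0 (in use), i.e. 2^12 = 4096-byte blocks, so nsze 0x140000 blocks works out to 5 GiB. A short sketch of that arithmetic follows, with the values hard-coded from the trace rather than re-queried.

#!/usr/bin/env bash
# Derive block size and capacity from id-ns fields as captured in the trace above
# (QEMU NVMe namespace). Values are copied from the log; purely illustrative.
declare -A ns=( [nsze]=0x140000 [flbas]=0x4 [lbaf4]="ms:0 lbads:12 rp:0 (in use)" )

fmt=$(( ${ns[flbas]} & 0xf ))                                 # low nibble of FLBAS = format in use
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ns[lbaf$fmt]}")
block_size=$(( 1 << lbads ))                                  # LBADS is a power-of-two exponent
capacity=$(( ${ns[nsze]} * block_size ))

echo "lbaf$fmt in use: ${block_size}-byte blocks"
echo "nsze=$(( ${ns[nsze]} )) blocks -> $capacity bytes ($(( capacity >> 30 )) GiB)"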
-- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:02.679 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.679 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:02.680 22:49:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:02.680 22:49:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:02.680 22:49:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:02.680 22:49:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:02.680 22:49:41 
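At this point the loop has filed nvme0 away under its PCI address 0000:00:11.0 and moved on to the second controller at 0000:00:10.0. The following is a standalone sketch of the same controller-to-BDF-to-namespaces bookkeeping built purely from sysfs; the array names bdf_of and ns_of are illustrative, not the ctrls/nvmes/bdfs arrays used by the harness.

#!/usr/bin/env bash
# Sketch: map each NVMe controller to its PCI address and block namespaces via sysfs,
# mirroring the ctrls/bdfs bookkeeping visible in the trace (illustrative names).
declare -A bdf_of ns_of

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                          # no controllers present
    name=${ctrl##*/}                                    # e.g. nvme0
    bdf=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:11.0 for PCIe controllers
    bdf_of[$name]=$bdf
    for ns in "$ctrl/${name}n"*; do                     # block namespaces: nvme0n1, nvme0n2, ...
        [[ -e $ns ]] && ns_of[$name]+="${ns##*/} "
    done
done

for name in "${!bdf_of[@]}"; do
    printf '%s @ %s: %s\n' "$name" "${bdf_of[$name]}" "${ns_of[$name]:-no namespaces}"
done

Note that the trace additionally keeps an ordered_ctrls list so controllers are handled in numeric order; a plain associative array like the one above does not preserve that ordering.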
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.680 
22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.680 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:02.681 
22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.681 22:49:41 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:02.681 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.682 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.682 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:02.683 22:49:41 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.683 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:02.684 22:49:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.684 
22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.684 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:02.685 
22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.685 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:02.686 22:49:41 
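The id-ns data captured above for ng1n1 and nvme1n1 reports nsze = ncap = nuse = 0x17a17a with flbas = 0x7, and lbaf7 ("ms:64 lbads:12 rp:0") is marked in use. Reading lbads as the base-2 log of the LBA data size, the namespace uses 4096-byte logical blocks, so its size works out as below (values taken directly from the trace; the shell arithmetic is illustrative only):

  blocks=$((0x17a17a))        # 1548666 logical blocks (nsze from the trace)
  block_size=$((1 << 12))     # lbaf7 has lbads:12 -> 4096-byte LBAs
  echo "$((blocks * block_size)) bytes"   # 6343335936 bytes, roughly 6.3 GB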
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:02.686 22:49:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:02.686 22:49:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:02.686 22:49:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:02.686 22:49:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:02.686 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
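The repeated IFS=: / read -r reg val / eval triplets above are the nvme_get helper in nvme/functions.sh turning `nvme id-ctrl` output into a global bash associative array keyed by field name (nvme2[vid], nvme2[sn], nvme2[oncs], and so on). A minimal sketch of that parsing pattern, reconstructed from what the traces show rather than copied from the script itself (the CI run invokes the locally built binary at /usr/local/src/nvme-cli/nvme):

    nvme_get_sketch() {                       # usage: nvme_get_sketch nvme2 id-ctrl /dev/nvme2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # global associative array, e.g. nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # "vid       " -> "vid", "lbaf  0" -> "lbaf0"
            val=${val# }                      # drop the space right after the first ':'
            [[ -n $reg && -n $val ]] || continue   # skip header/blank lines with no value
            eval "$ref[$reg]=\$val"           # nvme2[vid]='0x1b36', nvme2[oncs]='0x15d', ...
        done < <(nvme "$@")                   # sketch only; the job runs /usr/local/src/nvme-cli/nvme
    }

Downstream checks can then read any captured field directly, e.g. ${nvme2[oncs]} or ${nvme2[mdts]}, without re-running nvme-cli.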
00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:02.687 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.687 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:02.688 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:02.688 
22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:02.688 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.689 
22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
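Stepping back from the individual assignments: the surrounding loop (functions.sh@47-63 in these traces) walks /sys/class/nvme/nvme*, filters each controller through pci_can_use, captures its id-ctrl data, then repeats the capture for every matching ng*/n* namespace node and records everything in a few global maps (ctrls, nvmes, bdfs, ordered_ctrls, plus a per-controller <ctrl>_ns array). Roughly, and with the BDF lookup filled in as an assumption since the traces only show its result:

    shopt -s extglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(< "$ctrl/address") || continue          # assumption: sysfs 'address' holds the BDF, e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                # scripts/common.sh allow/deny filter
        ctrl_dev=${ctrl##*/}                          # nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns="${ctrl_dev}_ns"
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                          # ng2n1, ng2n2, nvme2n1, ...
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev               # keyed by namespace number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        unset -n _ctrl_ns
    done

The nvme1 bookkeeping near the top of this page (ctrls[nvme1]=nvme1, bdfs[nvme1]=0000:00:10.0, ordered_ctrls[1]=nvme1) is that same tail end of the loop for the previous controller.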
00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.689 22:49:41 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:02.690 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:02.691 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 
22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:02.691 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.692 22:49:41 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.692 22:49:41 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.692 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.693 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.694 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:02.694 22:49:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.694 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:02.695 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.695 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.696 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:02.697 
22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:02.697 22:49:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.697 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:02.698 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:02.698 22:49:41 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:02.698 22:49:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:02.698 22:49:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:02.698 22:49:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:02.698 22:49:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.698 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:02.699 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:02.699 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 
22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.699 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:02.700 22:49:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 
22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:02.700 
22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.700 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.701 22:49:41 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:02.701 22:49:41 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:02.701 22:49:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
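The trace above is nvme/functions.sh caching each controller's id-ctrl/id-ns fields in a bash associative array and then keeping only controllers whose ONCS value has bit 8 set, i.e. the Simple Copy capability that nvme_scc.sh needs. Below is a minimal standalone sketch of that pattern; it is not the real functions.sh code, it assumes nvme-cli's plain-text "field : value" output, and the device path /dev/nvme1 is only an example taken from this run.

  # Sketch only: parse `nvme id-ctrl` output into an associative array with the
  # same IFS=':' / read -r reg val loop the trace shows, then test ONCS bit 8.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=$(tr -d '[:space:]' <<< "$reg")      # strip padding around the field name
      [[ -n $reg ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme1)            # example device; requires nvme-cli

  oncs=${ctrl[oncs]:-0}                        # e.g. 0x15d in this log
  if (( oncs & 1 << 8 )); then                 # bit 8 of ONCS = Copy command supported
      echo "Simple Copy supported (oncs=$oncs)"
  fi

In this job every QEMU controller reports oncs=0x15d, so ctrl_has_scc passes for all four and, as the remainder of the trace shows, nvme_scc.sh simply uses the first match, nvme1 at 0000:00:10.0.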
00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:02.701 22:49:41 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:02.702 22:49:41 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:02.702 22:49:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:02.702 22:49:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:02.702 22:49:41 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:02.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:03.524 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.524 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.524 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.524 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.525 22:49:42 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:03.525 22:49:42 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:03.525 22:49:42 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.525 22:49:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:03.525 ************************************ 00:09:03.525 START TEST nvme_simple_copy 00:09:03.525 ************************************ 00:09:03.525 22:49:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:03.783 Initializing NVMe Controllers 00:09:03.783 Attaching to 0000:00:10.0 00:09:03.783 Controller supports SCC. Attached to 0000:00:10.0 00:09:03.783 Namespace ID: 1 size: 6GB 00:09:03.783 Initialization complete. 
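[editor's note] The ctrl_has_scc loop traced above walks every controller in ${!ctrls[@]}, reads its ONCS register, and keeps the ones with bit 8 (the Copy command) set before picking nvme1 as the SCC test target. A minimal standalone sketch of the same check, assuming nvme-cli is installed and using /dev/nvme0 as a placeholder device node:

  #!/usr/bin/env bash
  # Sketch only: report whether a controller advertises Simple Copy (ONCS bit 8),
  # mirroring the ctrl_has_scc check in nvme/functions.sh traced above.
  dev=${1:-/dev/nvme0}
  # nvme id-ctrl prints "oncs : 0x..."; strip the padding around the value.
  oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
  if (( oncs & (1 << 8) )); then
      echo "$dev supports Simple Copy (oncs=$oncs)"
  else
      echo "$dev does not support Simple Copy (oncs=$oncs)"
  fi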
00:09:03.783 00:09:03.783 Controller QEMU NVMe Ctrl (12340 ) 00:09:03.783 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:03.783 Namespace Block Size:4096 00:09:03.783 Writing LBAs 0 to 63 with Random Data 00:09:03.783 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:03.783 LBAs matching Written Data: 64 00:09:03.783 00:09:03.783 real 0m0.244s 00:09:03.783 user 0m0.088s 00:09:03.783 sys 0m0.056s 00:09:03.783 22:49:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.783 ************************************ 00:09:03.783 END TEST nvme_simple_copy 00:09:03.783 ************************************ 00:09:03.783 22:49:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:03.783 ************************************ 00:09:03.783 END TEST nvme_scc 00:09:03.783 ************************************ 00:09:03.783 00:09:03.783 real 0m7.651s 00:09:03.783 user 0m1.103s 00:09:03.783 sys 0m1.362s 00:09:03.783 22:49:42 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.783 22:49:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:03.783 22:49:42 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:03.783 22:49:42 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:03.783 22:49:42 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:03.783 22:49:42 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:03.783 22:49:42 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:03.783 22:49:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.783 22:49:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.783 22:49:42 -- common/autotest_common.sh@10 -- # set +x 00:09:03.783 ************************************ 00:09:03.783 START TEST nvme_fdp 00:09:03.783 ************************************ 00:09:03.783 22:49:42 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:04.041 * Looking for test storage... 00:09:04.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:04.041 22:49:42 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:04.041 22:49:42 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:04.042 22:49:42 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:04.042 22:49:43 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:04.042 22:49:43 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.042 22:49:43 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:04.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.042 --rc genhtml_branch_coverage=1 00:09:04.042 --rc genhtml_function_coverage=1 00:09:04.042 --rc genhtml_legend=1 00:09:04.042 --rc geninfo_all_blocks=1 00:09:04.042 --rc geninfo_unexecuted_blocks=1 00:09:04.042 00:09:04.042 ' 00:09:04.042 22:49:43 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:04.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.042 --rc genhtml_branch_coverage=1 00:09:04.042 --rc genhtml_function_coverage=1 00:09:04.042 --rc genhtml_legend=1 00:09:04.042 --rc geninfo_all_blocks=1 00:09:04.042 --rc geninfo_unexecuted_blocks=1 00:09:04.042 00:09:04.042 ' 00:09:04.042 22:49:43 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:04.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.042 --rc genhtml_branch_coverage=1 00:09:04.042 --rc genhtml_function_coverage=1 00:09:04.042 --rc genhtml_legend=1 00:09:04.042 --rc geninfo_all_blocks=1 00:09:04.042 --rc geninfo_unexecuted_blocks=1 00:09:04.042 00:09:04.042 ' 00:09:04.042 22:49:43 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:04.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.042 --rc genhtml_branch_coverage=1 00:09:04.042 --rc genhtml_function_coverage=1 00:09:04.042 --rc genhtml_legend=1 00:09:04.042 --rc geninfo_all_blocks=1 00:09:04.042 --rc geninfo_unexecuted_blocks=1 00:09:04.042 00:09:04.042 ' 00:09:04.042 22:49:43 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
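[editor's note] The cmp_versions trace above ("lt 1.15 2") splits the two dotted versions on IFS=.-: and compares them field by field to decide which lcov option set to export. A minimal sketch of that comparison, simplified to plain dot-separated numeric versions (an assumption; the real scripts/common.sh also handles "-" and ":" separators):

  # Sketch only: return success when version $1 is strictly less than $2.
  version_lt() {
      local -a a b
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "1.15 < 2"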
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.042 22:49:43 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.042 22:49:43 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.042 22:49:43 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.042 22:49:43 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.042 22:49:43 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:04.042 22:49:43 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:04.042 22:49:43 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:04.042 22:49:43 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:04.042 22:49:43 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:04.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:04.558 Waiting for block devices as requested 00:09:04.558 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.558 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.558 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.558 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.824 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:09.825 22:49:48 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:09.825 22:49:48 nvme_fdp 
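[editor's note] The setup.sh output above shows each 1b36:0010 controller being rebound from uio_pci_generic back to the kernel nvme driver before the scan starts. A small sketch that lists which driver every NVMe-class PCI function is currently bound to, assuming a Linux sysfs layout (this is an illustration, not part of setup.sh):

  # Sketch only: print "<bdf> -> <driver>" for each NVMe controller (class 0x010802).
  for dev in /sys/bus/pci/devices/*; do
      [[ $(cat "$dev/class") == 0x010802 ]] || continue   # NVMe programming interface
      if [[ -e $dev/driver ]]; then
          drv=$(basename "$(readlink -f "$dev/driver")")
      else
          drv=none                                        # unbound device
      fi
      echo "${dev##*/} -> $drv"
  done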
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:09.825 22:49:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:09.825 22:49:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:09.825 22:49:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.825 22:49:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- 
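[editor's note] The nvme_get helper traced above reads "reg : val" pairs from nvme-cli's id-ctrl output and stores them in a per-controller associative array (nvme0[vid]=0x1b36, nvme0[sn]='12341 ', and so on). A minimal sketch of that parsing pattern, assuming nvme-cli's human-readable "name : value" output and /dev/nvme0 as a placeholder device:

  # Sketch only: load id-ctrl fields into an associative array, like nvme_get.
  declare -A idctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                  # drop padding around the field name
      val=${val#"${val%%[![:space:]]*}"}        # trim leading whitespace from the value
      [[ -n $reg ]] && idctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  echo "mn=${idctrl[mn]} oncs=${idctrl[oncs]} mdts=${idctrl[mdts]}"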
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:09.825 22:49:48 nvme_fdp -- 
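[editor's note] The mdts=7 value captured above is expressed as a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN). Assuming the common 4 KiB minimum page, that limits a single data transfer to 512 KiB:

  # Sketch only: derive the maximum transfer size implied by MDTS.
  mdts=7
  mpsmin_bytes=4096    # assumed CAP.MPSMIN page size
  echo "max transfer: $(( (1 << mdts) * mpsmin_bytes )) bytes"   # 524288 = 512 KiB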
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.825 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:09.826 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.826 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:09.827 22:49:48 nvme_fdp -- 
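[editor's note] The sqes=0x66 and cqes=0x44 fields captured just below encode the required and maximum queue entry sizes: the low nibble is the required size and the high nibble the maximum, both as log2 of the size in bytes. A short sketch decoding them:

  # Sketch only: decode submission/completion queue entry sizes from sqes/cqes.
  sqes=0x66; cqes=0x44
  printf 'SQ entry: %d..%d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))   # 64..64
  printf 'CQ entry: %d..%d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))   # 16..16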
nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 
22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:09.827 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:09.827 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:09.828 22:49:48 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:09.828 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:09.828 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:09.829 22:49:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
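[editor's note] The trace above is nvme/functions.sh walking the output of "nvme id-ns /dev/ng0n1" line by line and caching every reported field (nsze, flbas, the lbaf0-7 format descriptors, and so on) in a bash associative array named after the device node. For readers following the log, the snippet below is a minimal stand-alone sketch of that pattern, not the SPDK helper itself; it assumes nvme-cli's default human-readable "field : value" text output, and the device path and array name are illustrative only.

    #!/usr/bin/env bash
    # Sketch: collect `nvme id-ns` fields into a bash associative array,
    # mirroring the reg/val loop seen in the trace above (assumption: plain
    # "field : value" output from nvme-cli; /dev/ng0n1 is just an example).
    declare -A ns_info

    parse_id_ns() {
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # keys like "lbaf  4" collapse to "lbaf4"
            val="${val#"${val%%[![:space:]]*}"}"  # drop leading whitespace from the value
            [[ -n $reg && -n $val ]] || continue  # skip the banner line and blanks
            ns_info[$reg]=$val
        done < <(nvme id-ns "$dev")
    }

    parse_id_ns /dev/ng0n1   # hypothetical device node, matching the namespace in this log

    # flbas selects the in-use LBA format; lbaf4 above carries "(in use)" accordingly
    printf 'nsze=%s flbas=%s lbaf4=%s\n' \
        "${ns_info[nsze]}" "${ns_info[flbas]}" "${ns_info[lbaf4]}"

Keeping the whole identify structure in one array is what lets the later FDP checks in this run simply index fields such as nsze or the lbaf descriptors without re-invoking nvme-cli per field.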
00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:09.829 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:09.830 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:09.830 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:09.831 22:49:48 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:09.831 22:49:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:09.831 22:49:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:09.831 22:49:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.831 22:49:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:09.831 22:49:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.831 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.832 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:09.833 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:09.834 22:49:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:09.834 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:09.835 22:49:48 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:09.835 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.835 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:09.836 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:09.836 22:49:48 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.836 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:09.837 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:09.837 22:49:48 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:09.837 22:49:48 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:09.837 22:49:48 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:09.837 22:49:48 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:09.837 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
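The id-ctrl trace above follows one fixed pattern: every "reg : val" line printed by nvme-cli is split on ':' and stored into a per-controller bash associative array (here nvme2), which is what the repeated IFS=: / read -r reg val / eval steps correspond to. A minimal standalone sketch of that pattern is shown below; the device path /dev/nvme2, the array name, and the trimming details are assumptions for illustration and this is not the nvme/functions.sh source.

    #!/usr/bin/env bash
    # Illustrative sketch: parse "field : value" lines from `nvme id-ctrl`
    # into a bash associative array, mirroring the traced pattern above.
    set -euo pipefail

    declare -gA nvme2=()            # per-controller register map (assumed name)

    nvme_get_sketch() {
        local dev=$1 reg val
        # nvme-cli prints one "reg : val" pair per line for id-ctrl
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # strip padding around the field name
            val=${val# }               # drop the single space after ':'
            [[ -n $reg && -n $val ]] || continue
            # store into the associative array, as the traced eval does
            eval "nvme2[$reg]=\"\$val\""
        done < <(nvme id-ctrl "$dev" 2>/dev/null || true)
    }

    nvme_get_sketch /dev/nvme2       # hypothetical device path
    echo "vid=${nvme2[vid]:-unset} sn=${nvme2[sn]:-unset}"

Run as root on a host with nvme-cli installed, this fills nvme2[] with the same fields (vid, ssvid, sn, mn, ...) that the trace assigns one by one.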
00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:09.838 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:10.104 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.104 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:10.105 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:10.105 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
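The trace entries above all repeat one pattern from nvme/functions.sh: nvme_get splits each "register : value" line of the nvme-cli output on its first colon (IFS=: and read -r reg val at functions.sh@21), skips empty values (the [[ -n ... ]] tests at functions.sh@22), and evals the pair into a global associative array such as nvme2 (functions.sh@23). A minimal sketch of that loop follows; it is reconstructed from the trace, not copied from the real functions.sh, and the plain `nvme` call stands in for the /usr/local/src/nvme-cli/nvme binary the job actually invokes:

    # Sketch only: reconstructed from the xtrace output above, not the real nvme/functions.sh.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                       # global assoc array, e.g. nvme2=() (as at functions.sh@20)
        while IFS=: read -r reg val; do           # split "sqes : 0x66" at the first colon
            reg=${reg//[[:space:]]/}              # "sqes " -> "sqes", "ps 0" -> "ps0"
            val="${val#"${val%%[![:space:]]*}"}"  # trim leading whitespace from the value
            [[ -n $val ]] || continue             # mirrors the [[ -n ... ]] checks in the trace
            eval "${ref}[${reg}]=\"\$val\""       # e.g. nvme2[sqes]=0x66, nvme2[nn]=256
        done < <(nvme "$@")                       # e.g. nvme id-ctrl /dev/nvme2
    }

Each identify register then stays available to the later FDP checks as an array lookup, e.g. ${nvme2[oncs]} or ${nvme2[subnqn]}.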
00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 
22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:10.106 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:48 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:10.107 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:10.108 22:49:49 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.108 
22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:10.108 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:10.109 
22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:10.109 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
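Once a controller's identify data is captured, the functions.sh@53-58 entries show how its namespaces are walked: a nameref _ctrl_ns points at the per-controller array (nvme2_ns), an extglob pattern matches both the ng2nX character nodes and the nvme2nX block nodes under /sys/class/nvme/nvme2, and each hit is fed back through nvme_get before being indexed by namespace id. A rough sketch, again inferred from the trace rather than taken from functions.sh, and assuming extglob is enabled and nvme2_ns is declared elsewhere:

    # Sketch only: inferred from the functions.sh@53-58 lines in the trace above.
    shopt -s extglob nullglob
    for_each_ns() {
        local ctrl=$1                                    # e.g. /sys/class/nvme/nvme2
        local -n _ctrl_ns="${ctrl##*/}_ns"               # nameref to nvme2_ns (assumed declared by the caller)
        local ns ns_dev
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng2n1, ng2n2, ..., nvme2n1, ...
            [[ -e $ns ]] || continue                     # mirrors the [[ -e ... ]] test at functions.sh@55
            ns_dev=${ns##*/}                             # e.g. ng2n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # populate ng2n1=(...), as in the trace
            _ctrl_ns[${ns##*n}]=$ns_dev                  # index by namespace id: _ctrl_ns[1]=ng2n1
        done
    }

After the ng2n1..ng2n3 character devices, the glob also matches the block-device entries, which is why the next nvme_get call in the trace targets nvme2n1.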
00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:10.110 22:49:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.110 22:49:49 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:10.110 22:49:49 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:10.110 22:49:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.111 
22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:10.111 22:49:49 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:10.111 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.112 
22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:10.112 22:49:49 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:10.112 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:10.113 22:49:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.113 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:10.114 22:49:49 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:10.114 22:49:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:10.114 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:10.115 22:49:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:10.115 22:49:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:10.115 22:49:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:10.115 22:49:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:10.115 22:49:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:10.115 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
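The trace above shows nvme_get consuming the "field : value" lines emitted by nvme id-ctrl and eval'ing each pair into the nvme3 associative array (vid, ssvid, sn, mn, fr, and so on). A simplified standalone sketch of that pattern follows; it is an assumption-level reconstruction, not the actual functions.sh implementation, and uses the nvme-cli path seen in the trace.

    # Sketch: collect "field : value" pairs from nvme-cli identify output into a
    # bash associative array, roughly what nvme_get does above (simplified).
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # strip whitespace from the field name
        [[ -n $reg && -n $val ]] || continue  # skip blank or malformed lines
        ctrl[$reg]=${val# }                   # keep the value, trimming one leading space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "vid=${ctrl[vid]} ctratt=${ctrl[ctratt]}"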
00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 
22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.116 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.117 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:10.118 22:49:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:10.118 22:49:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:10.119 22:49:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:10.119 22:49:49 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:10.119 22:49:49 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:10.119 22:49:49 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:10.119 22:49:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:10.119 22:49:49 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:10.119 22:49:49 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:10.684 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:10.942 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:10.942 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:10.942 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.200 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:11.200 22:49:50 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:11.200 22:49:50 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:11.200 22:49:50 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.200 22:49:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:11.200 ************************************ 00:09:11.200 START TEST nvme_flexible_data_placement 00:09:11.200 ************************************ 00:09:11.200 22:49:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:11.458 Initializing NVMe Controllers 00:09:11.458 Attaching to 0000:00:13.0 00:09:11.458 Controller supports FDP Attached to 0000:00:13.0 00:09:11.458 Namespace ID: 1 Endurance Group ID: 1 00:09:11.458 Initialization complete. 
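The controller selection above comes down to one predicate: get_ctrls_with_feature reads each controller's CTRATT and tests bit 19, so 0x88010 (nvme3 at 0000:00:13.0) qualifies while 0x8000 (nvme0, nvme1, nvme2) does not. A minimal sketch of that check, assuming CTRATT has already been captured as in the trace:

    # Sketch: bit 19 of CTRATT is the bit the ctrl_has_fdp check above tests for
    # Flexible Data Placement support.
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))
    }
    ctrl_has_fdp 0x88010 && echo "FDP-capable"   # matches nvme3 above
    ctrl_has_fdp 0x8000  || echo "no FDP"        # matches nvme0/nvme1/nvme2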
00:09:11.458 00:09:11.458 ================================== 00:09:11.458 == FDP tests for Namespace: #01 == 00:09:11.458 ================================== 00:09:11.458 00:09:11.458 Get Feature: FDP: 00:09:11.458 ================= 00:09:11.458 Enabled: Yes 00:09:11.458 FDP configuration Index: 0 00:09:11.458 00:09:11.458 FDP configurations log page 00:09:11.458 =========================== 00:09:11.458 Number of FDP configurations: 1 00:09:11.458 Version: 0 00:09:11.458 Size: 112 00:09:11.458 FDP Configuration Descriptor: 0 00:09:11.458 Descriptor Size: 96 00:09:11.458 Reclaim Group Identifier format: 2 00:09:11.458 FDP Volatile Write Cache: Not Present 00:09:11.458 FDP Configuration: Valid 00:09:11.458 Vendor Specific Size: 0 00:09:11.458 Number of Reclaim Groups: 2 00:09:11.458 Number of Reclaim Unit Handles: 8 00:09:11.458 Max Placement Identifiers: 128 00:09:11.458 Number of Namespaces Supported: 256 00:09:11.458 Reclaim Unit Nominal Size: 6000000 bytes 00:09:11.458 Estimated Reclaim Unit Time Limit: Not Reported 00:09:11.458 RUH Desc #000: RUH Type: Initially Isolated 00:09:11.458 RUH Desc #001: RUH Type: Initially Isolated 00:09:11.459 RUH Desc #002: RUH Type: Initially Isolated 00:09:11.459 RUH Desc #003: RUH Type: Initially Isolated 00:09:11.459 RUH Desc #004: RUH Type: Initially Isolated 00:09:11.459 RUH Desc #005: RUH Type: Initially Isolated 00:09:11.459 RUH Desc #006: RUH Type: Initially Isolated 00:09:11.459 RUH Desc #007: RUH Type: Initially Isolated 00:09:11.459 00:09:11.459 FDP reclaim unit handle usage log page 00:09:11.459 ====================================== 00:09:11.459 Number of Reclaim Unit Handles: 8 00:09:11.459 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:11.459 RUH Usage Desc #001: RUH Attributes: Unused 00:09:11.459 RUH Usage Desc #002: RUH Attributes: Unused 00:09:11.459 RUH Usage Desc #003: RUH Attributes: Unused 00:09:11.459 RUH Usage Desc #004: RUH Attributes: Unused 00:09:11.459 RUH Usage Desc #005: RUH Attributes: Unused 00:09:11.459 RUH Usage Desc #006: RUH Attributes: Unused 00:09:11.459 RUH Usage Desc #007: RUH Attributes: Unused 00:09:11.459 00:09:11.459 FDP statistics log page 00:09:11.459 ======================= 00:09:11.459 Host bytes with metadata written: 1055440896 00:09:11.459 Media bytes with metadata written: 1055621120 00:09:11.459 Media bytes erased: 0 00:09:11.459 00:09:11.459 FDP Reclaim unit handle status 00:09:11.459 ============================== 00:09:11.459 Number of RUHS descriptors: 2 00:09:11.459 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003174 00:09:11.459 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:11.459 00:09:11.459 FDP write on placement id: 0 success 00:09:11.459 00:09:11.459 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:11.459 00:09:11.459 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:11.459 00:09:11.459 Get Feature: FDP Events for Placement handle: #0 00:09:11.459 ======================== 00:09:11.459 Number of FDP Events: 6 00:09:11.459 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:11.459 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:11.459 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:11.459 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:11.459 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:11.459 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:11.459 00:09:11.459 FDP events log
page 00:09:11.459 =================== 00:09:11.459 Number of FDP events: 1 00:09:11.459 FDP Event #0: 00:09:11.459 Event Type: RU Not Written to Capacity 00:09:11.459 Placement Identifier: Valid 00:09:11.459 NSID: Valid 00:09:11.459 Location: Valid 00:09:11.459 Placement Identifier: 0 00:09:11.459 Event Timestamp: d 00:09:11.459 Namespace Identifier: 1 00:09:11.459 Reclaim Group Identifier: 0 00:09:11.459 Reclaim Unit Handle Identifier: 0 00:09:11.459 00:09:11.459 FDP test passed 00:09:11.459 00:09:11.459 real 0m0.244s 00:09:11.459 user 0m0.078s 00:09:11.459 sys 0m0.065s 00:09:11.459 22:49:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.459 22:49:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:11.459 ************************************ 00:09:11.459 END TEST nvme_flexible_data_placement 00:09:11.459 ************************************ 00:09:11.459 00:09:11.459 real 0m7.531s 00:09:11.459 user 0m1.063s 00:09:11.459 sys 0m1.382s 00:09:11.459 22:49:50 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.459 22:49:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:11.459 ************************************ 00:09:11.459 END TEST nvme_fdp 00:09:11.459 ************************************ 00:09:11.459 22:49:50 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:11.459 22:49:50 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:11.459 22:49:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.459 22:49:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.459 22:49:50 -- common/autotest_common.sh@10 -- # set +x 00:09:11.459 ************************************ 00:09:11.459 START TEST nvme_rpc 00:09:11.459 ************************************ 00:09:11.459 22:49:50 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:11.459 * Looking for test storage... 
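Each suite above is driven by the run_test wrapper, which prints the START TEST / END TEST banners and the real/user/sys timings seen in the log and propagates the test's exit status. A simplified sketch of that wrapper pattern follows; it is an assumption, not the real autotest_common.sh implementation.

    # Sketch of the run_test wrapper pattern (simplified): banner, timed
    # execution, banner, propagate exit status.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh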
00:09:11.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:11.459 22:49:50 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:11.459 22:49:50 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:11.459 22:49:50 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:11.459 22:49:50 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.459 22:49:50 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:11.717 22:49:50 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.717 22:49:50 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:11.717 22:49:50 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:11.717 22:49:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.717 22:49:50 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:11.717 22:49:50 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.717 22:49:50 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.717 22:49:50 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.717 22:49:50 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:11.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.717 --rc genhtml_branch_coverage=1 00:09:11.717 --rc genhtml_function_coverage=1 00:09:11.717 --rc genhtml_legend=1 00:09:11.717 --rc geninfo_all_blocks=1 00:09:11.717 --rc geninfo_unexecuted_blocks=1 00:09:11.717 00:09:11.717 ' 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:11.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.717 --rc genhtml_branch_coverage=1 00:09:11.717 --rc genhtml_function_coverage=1 00:09:11.717 --rc genhtml_legend=1 00:09:11.717 --rc geninfo_all_blocks=1 00:09:11.717 --rc geninfo_unexecuted_blocks=1 00:09:11.717 00:09:11.717 ' 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:11.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.717 --rc genhtml_branch_coverage=1 00:09:11.717 --rc genhtml_function_coverage=1 00:09:11.717 --rc genhtml_legend=1 00:09:11.717 --rc geninfo_all_blocks=1 00:09:11.717 --rc geninfo_unexecuted_blocks=1 00:09:11.717 00:09:11.717 ' 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:11.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.717 --rc genhtml_branch_coverage=1 00:09:11.717 --rc genhtml_function_coverage=1 00:09:11.717 --rc genhtml_legend=1 00:09:11.717 --rc geninfo_all_blocks=1 00:09:11.717 --rc geninfo_unexecuted_blocks=1 00:09:11.717 00:09:11.717 ' 00:09:11.717 22:49:50 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.717 22:49:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:11.717 22:49:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:11.717 22:49:50 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67529 00:09:11.717 22:49:50 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:11.717 22:49:50 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67529 00:09:11.717 22:49:50 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67529 ']' 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.717 22:49:50 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.717 [2024-12-13 22:49:50.735426] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
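The nvme_rpc setup above selects the first NVMe bus/device/function by asking gen_nvme.sh for its generated bdev config and extracting the transport addresses with jq, which is how bdf ends up as 0000:00:10.0. A standalone sketch of that selection step, using the same paths and jq filter as the trace:

    # Sketch: list NVMe PCI addresses the way get_first_nvme_bdf does above,
    # then keep the first one (here 0000:00:10.0).
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"
    bdf=${bdfs[0]}
    echo "using bdf: $bdf"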
00:09:11.717 [2024-12-13 22:49:50.735543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67529 ] 00:09:11.975 [2024-12-13 22:49:50.894969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:11.975 [2024-12-13 22:49:50.991370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.975 [2024-12-13 22:49:50.991398] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.541 22:49:51 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.541 22:49:51 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:12.541 22:49:51 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:12.799 Nvme0n1 00:09:12.799 22:49:51 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:12.799 22:49:51 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:13.057 request: 00:09:13.057 { 00:09:13.057 "bdev_name": "Nvme0n1", 00:09:13.057 "filename": "non_existing_file", 00:09:13.057 "method": "bdev_nvme_apply_firmware", 00:09:13.057 "req_id": 1 00:09:13.057 } 00:09:13.057 Got JSON-RPC error response 00:09:13.057 response: 00:09:13.057 { 00:09:13.057 "code": -32603, 00:09:13.057 "message": "open file failed." 00:09:13.057 } 00:09:13.057 22:49:52 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:13.057 22:49:52 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:13.058 22:49:52 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:13.316 22:49:52 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:13.316 22:49:52 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67529 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67529 ']' 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67529 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67529 00:09:13.316 killing process with pid 67529 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67529' 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67529 00:09:13.316 22:49:52 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67529 00:09:14.722 00:09:14.722 real 0m3.241s 00:09:14.722 user 0m6.218s 00:09:14.722 sys 0m0.486s 00:09:14.722 22:49:53 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.722 22:49:53 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.722 ************************************ 00:09:14.722 END TEST nvme_rpc 00:09:14.722 ************************************ 00:09:14.722 22:49:53 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:14.722 22:49:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
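The RPC sequence above is a deliberate negative test: the controller is attached as bdev Nvme0, bdev_nvme_apply_firmware is called with a file that does not exist so the -32603 "open file failed." JSON-RPC error is expected, and the controller is then detached. The same sequence could be reproduced by hand against a running spdk_tgt; the sketch below reuses exactly the RPC calls shown in the trace.

    # Sketch: the same RPC sequence the test runs; the middle call is expected
    # to fail with "open file failed." (-32603).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # -> Nvme0n1
    if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "apply_firmware failed as expected"
    fi
    $rpc bdev_nvme_detach_controller Nvme0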
1 ']' 00:09:14.722 22:49:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.722 22:49:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.722 ************************************ 00:09:14.722 START TEST nvme_rpc_timeouts 00:09:14.722 ************************************ 00:09:14.722 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:14.722 * Looking for test storage... 00:09:14.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:14.722 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.722 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.722 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.987 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.987 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.988 22:49:53 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.988 --rc genhtml_branch_coverage=1 00:09:14.988 --rc genhtml_function_coverage=1 00:09:14.988 --rc genhtml_legend=1 00:09:14.988 --rc geninfo_all_blocks=1 00:09:14.988 --rc geninfo_unexecuted_blocks=1 00:09:14.988 00:09:14.988 ' 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.988 --rc genhtml_branch_coverage=1 00:09:14.988 --rc genhtml_function_coverage=1 00:09:14.988 --rc genhtml_legend=1 00:09:14.988 --rc geninfo_all_blocks=1 00:09:14.988 --rc geninfo_unexecuted_blocks=1 00:09:14.988 00:09:14.988 ' 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.988 --rc genhtml_branch_coverage=1 00:09:14.988 --rc genhtml_function_coverage=1 00:09:14.988 --rc genhtml_legend=1 00:09:14.988 --rc geninfo_all_blocks=1 00:09:14.988 --rc geninfo_unexecuted_blocks=1 00:09:14.988 00:09:14.988 ' 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.988 --rc genhtml_branch_coverage=1 00:09:14.988 --rc genhtml_function_coverage=1 00:09:14.988 --rc genhtml_legend=1 00:09:14.988 --rc geninfo_all_blocks=1 00:09:14.988 --rc geninfo_unexecuted_blocks=1 00:09:14.988 00:09:14.988 ' 00:09:14.988 22:49:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.988 22:49:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67594 00:09:14.988 22:49:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67594 00:09:14.988 22:49:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67626 00:09:14.988 22:49:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
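Stepping back for a moment: the nvme_rpc test that finished above boils down to three JSON-RPC calls against the target it started on cores 0x3. A minimal sketch of the same sequence, using the rpc.py invocations verbatim from the trace (error handling simplified):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach the PCIe controller at 0000:00:10.0 as controller "Nvme0"; this creates bdev Nvme0n1.
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0

# Apply firmware from a file that does not exist; the call is expected to fail with
# JSON-RPC error -32603 "open file failed.", which is exactly what the test asserts on.
if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "bdev_nvme_apply_firmware unexpectedly succeeded" >&2
    exit 1
fi

# Detach the controller again before shutting the target down.
$rpc bdev_nvme_detach_controller Nvme0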
00:09:14.988 22:49:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67626 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67626 ']' 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.988 22:49:53 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:14.988 22:49:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:14.988 [2024-12-13 22:49:53.952793] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:14.988 [2024-12-13 22:49:53.952905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67626 ] 00:09:14.988 [2024-12-13 22:49:54.110090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:15.246 [2024-12-13 22:49:54.205320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.246 [2024-12-13 22:49:54.205512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.812 22:49:54 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.812 22:49:54 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:15.812 Checking default timeout settings: 00:09:15.812 22:49:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:15.812 22:49:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:16.069 Making settings changes with rpc: 00:09:16.070 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:16.070 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:16.327 Check default vs. modified settings: 00:09:16.327 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:16.327 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67594 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67594 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:16.585 Setting action_on_timeout is changed as expected. 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67594 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.585 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67594 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:16.586 Setting timeout_us is changed as expected. 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67594 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67594 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:16.586 Setting timeout_admin_us is changed as expected. 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67594 /tmp/settings_modified_67594 00:09:16.586 22:49:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67626 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67626 ']' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67626 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67626 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.586 killing process with pid 67626 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67626' 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67626 00:09:16.586 22:49:55 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67626 00:09:17.960 RPC TIMEOUT SETTING TEST PASSED. 00:09:17.960 22:49:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
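The three "changed as expected" checks just traced compare a saved default configuration against one saved after raising the NVMe timeouts. Condensed into a plain script (a sketch: the commands and temp-file names are taken from the trace, save_config is assumed to be redirected into those files as the later grep calls imply, and the real test compares each value against its expected new setting rather than a simple inequality):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
defaults=/tmp/settings_default_67594
modified=/tmp/settings_modified_67594

# Snapshot the default options (action_on_timeout=none, timeout_us=0, timeout_admin_us=0).
$rpc save_config > "$defaults"

# Raise the I/O and admin command timeouts and make a timeout abort the command.
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort

# Snapshot again and confirm every setting really changed.
$rpc save_config > "$modified"
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" "$defaults" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" != "$after" ] || { echo "Setting $setting did not change" >&2; exit 1; }
done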
00:09:17.960 00:09:17.960 real 0m3.114s 00:09:17.960 user 0m6.121s 00:09:17.960 sys 0m0.454s 00:09:17.960 22:49:56 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.960 22:49:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:17.960 ************************************ 00:09:17.960 END TEST nvme_rpc_timeouts 00:09:17.960 ************************************ 00:09:17.960 22:49:56 -- spdk/autotest.sh@239 -- # uname -s 00:09:17.960 22:49:56 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:17.960 22:49:56 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:17.960 22:49:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.960 22:49:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.960 22:49:56 -- common/autotest_common.sh@10 -- # set +x 00:09:17.960 ************************************ 00:09:17.960 START TEST sw_hotplug 00:09:17.960 ************************************ 00:09:17.960 22:49:56 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:17.960 * Looking for test storage... 00:09:17.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:17.960 22:49:56 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:17.960 22:49:56 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:17.960 22:49:56 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:17.960 22:49:57 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.960 22:49:57 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:17.960 22:49:57 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.960 22:49:57 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.960 --rc genhtml_branch_coverage=1 00:09:17.960 --rc genhtml_function_coverage=1 00:09:17.960 --rc genhtml_legend=1 00:09:17.960 --rc geninfo_all_blocks=1 00:09:17.960 --rc geninfo_unexecuted_blocks=1 00:09:17.960 00:09:17.960 ' 00:09:17.960 22:49:57 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.960 --rc genhtml_branch_coverage=1 00:09:17.960 --rc genhtml_function_coverage=1 00:09:17.960 --rc genhtml_legend=1 00:09:17.960 --rc geninfo_all_blocks=1 00:09:17.960 --rc geninfo_unexecuted_blocks=1 00:09:17.960 00:09:17.960 ' 00:09:17.960 22:49:57 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.960 --rc genhtml_branch_coverage=1 00:09:17.960 --rc genhtml_function_coverage=1 00:09:17.960 --rc genhtml_legend=1 00:09:17.960 --rc geninfo_all_blocks=1 00:09:17.960 --rc geninfo_unexecuted_blocks=1 00:09:17.960 00:09:17.960 ' 00:09:17.960 22:49:57 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.960 --rc genhtml_branch_coverage=1 00:09:17.960 --rc genhtml_function_coverage=1 00:09:17.960 --rc genhtml_legend=1 00:09:17.960 --rc geninfo_all_blocks=1 00:09:17.960 --rc geninfo_unexecuted_blocks=1 00:09:17.960 00:09:17.960 ' 00:09:17.960 22:49:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:18.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.476 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:18.476 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:18.476 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:18.476 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:18.476 22:49:57 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:18.476 22:49:57 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:18.476 22:49:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
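The nvmes array being assigned here is produced by nvme_in_userspace, whose expansion is traced next: lspci output is filtered down to devices with class 01, subclass 08, prog-if 02 (NVMe), and each BDF is then kept only if it is allowed and still has an entry under the kernel nvme driver. The core of that enumeration, lifted from the scripts/common.sh trace below:

# Class/subclass/prog-if 01/08/02 identifies NVMe controllers; -D prints full domain:bus:dev.func BDFs.
for bdf in $(lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
    # Keep only controllers that are still bound to the kernel nvme driver (Linux case).
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
done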
00:09:18.476 22:49:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:18.476 22:49:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:18.477 22:49:57 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:18.477 22:49:57 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:18.477 22:49:57 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:18.477 22:49:57 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:18.477 22:49:57 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:18.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.993 Waiting for block devices as requested 00:09:18.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.993 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.993 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.251 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.515 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:24.515 22:50:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:24.515 22:50:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:24.515 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:24.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.515 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:24.773 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:25.032 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.032 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.032 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:25.032 22:50:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68476 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:25.291 22:50:04 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:25.291 22:50:04 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:25.291 22:50:04 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:25.291 22:50:04 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:25.291 22:50:04 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:25.291 22:50:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:25.291 Initializing NVMe Controllers 00:09:25.291 Attaching to 0000:00:10.0 00:09:25.291 Attaching to 0000:00:11.0 00:09:25.291 Attached to 0000:00:11.0 00:09:25.291 Attached to 0000:00:10.0 00:09:25.291 Initialization complete. Starting I/O... 
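With setup.sh restricted by PCI_ALLOWED to the two controllers under test and the standalone hotplug example attached to both, the first phase of this test is essentially the following (a sketch: the flags are copied verbatim from the trace, and the example is assumed to be backgrounded with its pid recorded, 68476 in this run):

# Rebind only the two allowed controllers to uio_pci_generic; everything else keeps its kernel driver.
PCI_ALLOWED='0000:00:10.0 0000:00:11.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh

# Start SPDK's standalone hotplug example and remember its pid for the later kill/wait.
/home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning &
hotplug_pid=$!

# The helper then runs three remove/attach cycles (hotplug_events=3), waiting hotplug_wait=6 s
# after each event, without bdev-level verification (use_bdev=false).
remove_attach_helper 3 6 false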
00:09:25.291 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:25.291 QEMU NVMe Ctrl (12340 ): 1 I/Os completed (+1) 00:09:25.291 00:09:26.663 QEMU NVMe Ctrl (12341 ): 2785 I/Os completed (+2785) 00:09:26.663 QEMU NVMe Ctrl (12340 ): 3073 I/Os completed (+3072) 00:09:26.663 00:09:27.597 QEMU NVMe Ctrl (12341 ): 6019 I/Os completed (+3234) 00:09:27.597 QEMU NVMe Ctrl (12340 ): 6246 I/Os completed (+3173) 00:09:27.597 00:09:28.543 QEMU NVMe Ctrl (12341 ): 9704 I/Os completed (+3685) 00:09:28.543 QEMU NVMe Ctrl (12340 ): 9929 I/Os completed (+3683) 00:09:28.543 00:09:29.476 QEMU NVMe Ctrl (12341 ): 13087 I/Os completed (+3383) 00:09:29.476 QEMU NVMe Ctrl (12340 ): 13257 I/Os completed (+3328) 00:09:29.476 00:09:30.410 QEMU NVMe Ctrl (12341 ): 16447 I/Os completed (+3360) 00:09:30.410 QEMU NVMe Ctrl (12340 ): 16494 I/Os completed (+3237) 00:09:30.410 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:31.342 [2024-12-13 22:50:10.185786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:31.342 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:31.342 [2024-12-13 22:50:10.187178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.187306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.187346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.187413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:31.342 [2024-12-13 22:50:10.189322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.189398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.189430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.189500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:31.342 [2024-12-13 22:50:10.207779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:31.342 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:31.342 [2024-12-13 22:50:10.209002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.209042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.209062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.209078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:31.342 [2024-12-13 22:50:10.210797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.210835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.210850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 [2024-12-13 22:50:10.210863] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:31.342 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:31.342 EAL: Scan for (pci) bus failed. 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:31.342 Attaching to 0000:00:10.0 00:09:31.342 Attached to 0000:00:10.0 00:09:31.342 QEMU NVMe Ctrl (12340 ): 44 I/Os completed (+44) 00:09:31.342 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:31.342 22:50:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:31.342 Attaching to 0000:00:11.0 00:09:31.342 Attached to 0000:00:11.0 00:09:32.275 QEMU NVMe Ctrl (12340 ): 3610 I/Os completed (+3566) 00:09:32.275 QEMU NVMe Ctrl (12341 ): 3217 I/Os completed (+3217) 00:09:32.275 00:09:33.650 QEMU NVMe Ctrl (12340 ): 7019 I/Os completed (+3409) 00:09:33.651 QEMU NVMe Ctrl (12341 ): 6549 I/Os completed (+3332) 00:09:33.651 00:09:34.586 QEMU NVMe Ctrl (12340 ): 10371 I/Os completed (+3352) 00:09:34.586 QEMU NVMe Ctrl (12341 ): 9819 I/Os completed (+3270) 00:09:34.586 00:09:35.521 QEMU NVMe Ctrl (12340 ): 13917 I/Os completed (+3546) 00:09:35.521 QEMU NVMe Ctrl (12341 ): 13308 I/Os completed (+3489) 00:09:35.521 00:09:36.456 QEMU NVMe Ctrl (12340 ): 17595 I/Os completed (+3678) 00:09:36.456 QEMU NVMe Ctrl (12341 ): 17054 I/Os completed (+3746) 00:09:36.456 00:09:37.391 QEMU NVMe Ctrl (12340 ): 21029 I/Os completed (+3434) 00:09:37.391 QEMU NVMe Ctrl (12341 ): 20321 I/Os completed (+3267) 00:09:37.391 00:09:38.325 QEMU NVMe Ctrl (12340 ): 24736 I/Os completed (+3707) 
00:09:38.325 QEMU NVMe Ctrl (12341 ): 24031 I/Os completed (+3710) 00:09:38.325 00:09:39.316 QEMU NVMe Ctrl (12340 ): 28085 I/Os completed (+3349) 00:09:39.316 QEMU NVMe Ctrl (12341 ): 27367 I/Os completed (+3336) 00:09:39.316 00:09:40.249 QEMU NVMe Ctrl (12340 ): 31424 I/Os completed (+3339) 00:09:40.249 QEMU NVMe Ctrl (12341 ): 30544 I/Os completed (+3177) 00:09:40.249 00:09:41.621 QEMU NVMe Ctrl (12340 ): 34676 I/Os completed (+3252) 00:09:41.621 QEMU NVMe Ctrl (12341 ): 33775 I/Os completed (+3231) 00:09:41.621 00:09:42.555 QEMU NVMe Ctrl (12340 ): 38194 I/Os completed (+3518) 00:09:42.555 QEMU NVMe Ctrl (12341 ): 37104 I/Os completed (+3329) 00:09:42.555 00:09:43.489 QEMU NVMe Ctrl (12340 ): 41434 I/Os completed (+3240) 00:09:43.489 QEMU NVMe Ctrl (12341 ): 40397 I/Os completed (+3293) 00:09:43.489 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:43.489 [2024-12-13 22:50:22.439012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:43.489 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:43.489 [2024-12-13 22:50:22.441472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.441607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.441692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.441729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:43.489 [2024-12-13 22:50:22.443741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.443876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.443968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.443994] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:43.489 [2024-12-13 22:50:22.462324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:43.489 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:43.489 [2024-12-13 22:50:22.463415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.463457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.463477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.463495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:43.489 [2024-12-13 22:50:22.465160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.465196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.465212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 [2024-12-13 22:50:22.465226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:43.489 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:43.747 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:43.747 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:43.747 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:43.747 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:43.747 Attaching to 0000:00:10.0 00:09:43.747 Attached to 0000:00:10.0 00:09:43.747 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:43.747 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:43.747 22:50:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:43.747 Attaching to 0000:00:11.0 00:09:43.747 Attached to 0000:00:11.0 00:09:44.313 QEMU NVMe Ctrl (12340 ): 2418 I/Os completed (+2418) 00:09:44.313 QEMU NVMe Ctrl (12341 ): 2089 I/Os completed (+2089) 00:09:44.313 00:09:45.246 QEMU NVMe Ctrl (12340 ): 5696 I/Os completed (+3278) 00:09:45.246 QEMU NVMe Ctrl (12341 ): 5211 I/Os completed (+3122) 00:09:45.246 00:09:46.620 QEMU NVMe Ctrl (12340 ): 9195 I/Os completed (+3499) 00:09:46.620 QEMU NVMe Ctrl (12341 ): 8588 I/Os completed (+3377) 00:09:46.620 00:09:47.550 QEMU NVMe Ctrl (12340 ): 12541 I/Os completed (+3346) 00:09:47.550 QEMU NVMe Ctrl (12341 ): 11709 I/Os completed (+3121) 00:09:47.550 00:09:48.484 QEMU NVMe Ctrl (12340 ): 15925 I/Os completed (+3384) 00:09:48.484 QEMU NVMe Ctrl (12341 ): 14848 I/Os completed (+3139) 00:09:48.484 00:09:49.417 QEMU NVMe Ctrl (12340 ): 19535 I/Os completed (+3610) 00:09:49.417 QEMU NVMe Ctrl (12341 ): 18372 I/Os completed (+3524) 00:09:49.417 00:09:50.347 QEMU NVMe Ctrl (12340 ): 23237 I/Os completed (+3702) 00:09:50.347 QEMU NVMe Ctrl (12341 ): 22071 I/Os completed (+3699) 00:09:50.347 00:09:51.281 QEMU NVMe Ctrl (12340 ): 26836 I/Os completed (+3599) 00:09:51.281 QEMU NVMe Ctrl (12341 ): 25705 I/Os completed (+3634) 00:09:51.281 
00:09:52.659 QEMU NVMe Ctrl (12340 ): 30471 I/Os completed (+3635) 00:09:52.659 QEMU NVMe Ctrl (12341 ): 29380 I/Os completed (+3675) 00:09:52.659 00:09:53.602 QEMU NVMe Ctrl (12340 ): 33531 I/Os completed (+3060) 00:09:53.602 QEMU NVMe Ctrl (12341 ): 32438 I/Os completed (+3058) 00:09:53.602 00:09:54.541 QEMU NVMe Ctrl (12340 ): 36792 I/Os completed (+3261) 00:09:54.541 QEMU NVMe Ctrl (12341 ): 35708 I/Os completed (+3270) 00:09:54.541 00:09:55.485 QEMU NVMe Ctrl (12340 ): 39915 I/Os completed (+3123) 00:09:55.485 QEMU NVMe Ctrl (12341 ): 38839 I/Os completed (+3131) 00:09:55.485 00:09:55.745 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:55.745 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:55.745 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:55.745 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:55.745 [2024-12-13 22:50:34.713628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:55.745 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:55.745 [2024-12-13 22:50:34.714924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.745 [2024-12-13 22:50:34.714977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.745 [2024-12-13 22:50:34.714996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.745 [2024-12-13 22:50:34.715014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.745 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:55.745 [2024-12-13 22:50:34.717260] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.745 [2024-12-13 22:50:34.717369] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.745 [2024-12-13 22:50:34.717404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.745 [2024-12-13 22:50:34.717464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.745 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/numa_node 00:09:55.745 EAL: Cannot open sysfs resource 00:09:55.745 EAL: pci_scan_one(): cannot parse resource 00:09:55.745 EAL: Scan for (pci) bus failed. 00:09:55.745 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:55.745 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:55.745 [2024-12-13 22:50:34.739624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:55.745 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:55.746 [2024-12-13 22:50:34.740715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.746 [2024-12-13 22:50:34.740772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.746 [2024-12-13 22:50:34.740791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.746 [2024-12-13 22:50:34.740806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.746 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:55.746 [2024-12-13 22:50:34.742451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.746 [2024-12-13 22:50:34.742490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.746 [2024-12-13 22:50:34.742507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.746 [2024-12-13 22:50:34.742520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.746 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:55.746 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:55.746 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:55.746 EAL: Scan for (pci) bus failed. 00:09:55.746 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.746 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.746 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:56.007 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:56.007 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:56.007 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:56.007 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:56.007 22:50:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:56.007 Attaching to 0000:00:10.0 00:09:56.007 Attached to 0000:00:10.0 00:09:56.007 22:50:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:56.007 22:50:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:56.007 22:50:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:56.007 Attaching to 0000:00:11.0 00:09:56.007 Attached to 0000:00:11.0 00:09:56.007 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:56.007 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:56.007 [2024-12-13 22:50:35.040576] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:08.260 22:50:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:08.260 22:50:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:08.260 22:50:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.85 00:10:08.260 22:50:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.85 00:10:08.260 22:50:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:08.260 22:50:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.85 00:10:08.260 22:50:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.85 2 00:10:08.260 remove_attach_helper took 42.85s to complete (handling 2 nvme drive(s)) 22:50:47 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:14.845 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68476 00:10:14.845 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68476) - No such process 00:10:14.845 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68476 00:10:14.845 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:14.845 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:14.845 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:14.845 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69024 00:10:14.845 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:14.845 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69024 00:10:14.845 22:50:53 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69024 ']' 00:10:14.845 22:50:53 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.845 22:50:53 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.845 22:50:53 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.845 22:50:53 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.845 22:50:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:14.845 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:14.845 [2024-12-13 22:50:53.120615] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
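The standalone-example phase is over (the helper took 42.85 s and pid 68476 is gone), and tgt_run_hotplug now repeats the exercise against a full SPDK target. The trace that follows reduces to roughly this (a sketch: spdk_tgt is assumed to be backgrounded with its pid recorded, 69024 here; waitforlisten is the harness helper seen in the trace, and rpc_cmd is assumed to forward to scripts/rpc.py):

# Start the SPDK target and wait for its RPC socket at /var/tmp/spdk.sock.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"

# Enable bdev-level NVMe hotplug monitoring in the target.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e

# Same three remove/attach cycles as before, but now verified through bdevs (use_bdev=true).
remove_attach_helper 3 6 true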
00:10:14.845 [2024-12-13 22:50:53.120737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69024 ] 00:10:14.845 [2024-12-13 22:50:53.287562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.845 [2024-12-13 22:50:53.382983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:15.107 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.107 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:15.107 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:15.107 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:15.107 22:50:53 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:15.107 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:15.107 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:15.107 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:15.107 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:15.107 22:50:53 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:21.721 22:51:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.721 22:51:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.721 22:51:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:21.721 [2024-12-13 22:51:00.089095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:21.721 [2024-12-13 22:51:00.090468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.721 [2024-12-13 22:51:00.090506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.721 [2024-12-13 22:51:00.090519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.721 [2024-12-13 22:51:00.090539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.721 [2024-12-13 22:51:00.090546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.721 [2024-12-13 22:51:00.090555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.721 [2024-12-13 22:51:00.090562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.721 [2024-12-13 22:51:00.090570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.721 [2024-12-13 22:51:00.090577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.721 [2024-12-13 22:51:00.090589] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.721 [2024-12-13 22:51:00.090595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.721 [2024-12-13 22:51:00.090604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.721 [2024-12-13 22:51:00.489078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:21.721 [2024-12-13 22:51:00.490322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.721 [2024-12-13 22:51:00.490444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.721 [2024-12-13 22:51:00.490459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.721 [2024-12-13 22:51:00.490476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.721 [2024-12-13 22:51:00.490485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.721 [2024-12-13 22:51:00.490492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.721 [2024-12-13 22:51:00.490501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.721 [2024-12-13 22:51:00.490507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.721 [2024-12-13 22:51:00.490515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.721 [2024-12-13 22:51:00.490522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.721 [2024-12-13 22:51:00.490529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.721 [2024-12-13 22:51:00.490536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:21.721 22:51:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.721 22:51:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.721 22:51:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:21.721 22:51:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:33.983 22:51:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.983 22:51:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.983 22:51:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:33.983 22:51:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.983 22:51:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.983 22:51:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:33.983 22:51:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:33.983 [2024-12-13 22:51:12.989254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:33.983 [2024-12-13 22:51:12.990527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.983 [2024-12-13 22:51:12.990628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.983 [2024-12-13 22:51:12.990685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.983 [2024-12-13 22:51:12.990815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.983 [2024-12-13 22:51:12.990836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.983 [2024-12-13 22:51:12.990860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.983 [2024-12-13 22:51:12.990885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.983 [2024-12-13 22:51:12.990903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.983 [2024-12-13 22:51:12.990966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:33.983 [2024-12-13 22:51:12.990994] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.983 [2024-12-13 22:51:12.991010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:33.983 [2024-12-13 22:51:12.991034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.550 [2024-12-13 22:51:13.389266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:34.550 [2024-12-13 22:51:13.391039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.550 [2024-12-13 22:51:13.391147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.550 [2024-12-13 22:51:13.391216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.550 [2024-12-13 22:51:13.391274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.550 [2024-12-13 22:51:13.391295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.550 [2024-12-13 22:51:13.391318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.550 [2024-12-13 22:51:13.391343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.550 [2024-12-13 22:51:13.391392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.550 [2024-12-13 22:51:13.391420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.550 [2024-12-13 22:51:13.391454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.550 [2024-12-13 22:51:13.391473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.550 [2024-12-13 22:51:13.391532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.550 22:51:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.550 22:51:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.550 22:51:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.550 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:34.808 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:34.808 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.808 22:51:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.035 22:51:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.035 22:51:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.035 22:51:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.035 [2024-12-13 22:51:25.789449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:47.035 [2024-12-13 22:51:25.790877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.035 [2024-12-13 22:51:25.790976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.035 [2024-12-13 22:51:25.791039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.035 [2024-12-13 22:51:25.791075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.035 [2024-12-13 22:51:25.791189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.035 [2024-12-13 22:51:25.791220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.035 [2024-12-13 22:51:25.791273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.035 [2024-12-13 22:51:25.791293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.035 [2024-12-13 22:51:25.791317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.035 [2024-12-13 22:51:25.791367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.035 [2024-12-13 22:51:25.791409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.035 [2024-12-13 22:51:25.791464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.035 22:51:25 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.035 22:51:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.035 22:51:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.035 22:51:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:47.035 22:51:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:47.294 [2024-12-13 22:51:26.189449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:47.294 [2024-12-13 22:51:26.190606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.294 [2024-12-13 22:51:26.190634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.294 [2024-12-13 22:51:26.190645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.294 [2024-12-13 22:51:26.190656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.294 [2024-12-13 22:51:26.190664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.294 [2024-12-13 22:51:26.190671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.294 [2024-12-13 22:51:26.190680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.294 [2024-12-13 22:51:26.190686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.294 [2024-12-13 22:51:26.190699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.294 [2024-12-13 22:51:26.190706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.294 [2024-12-13 22:51:26.190714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.294 [2024-12-13 22:51:26.190720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.294 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:47.294 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.294 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.294 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.294 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.294 22:51:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.294 22:51:26 
sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.294 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.294 22:51:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.294 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:47.294 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.552 22:51:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.65 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.65 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.65 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.65 2 00:10:59.753 remove_attach_helper took 44.65s to complete (handling 2 nvme drive(s)) 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.753 22:51:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.753 22:51:38 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:59.753 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:59.754 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:59.754 22:51:38 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:59.754 22:51:38 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:59.754 22:51:38 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:59.754 22:51:38 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:59.754 22:51:38 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:59.754 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:59.754 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:59.754 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:59.754 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:59.754 22:51:38 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.312 22:51:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.312 22:51:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.312 22:51:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:06.312 22:51:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:06.312 [2024-12-13 22:51:44.766720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
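The TIMEFORMAT=%2R setup and the later time=44.65 / helper_time=44.65 lines show how the harness times each remove_attach_helper pass with bash's time keyword. A simplified sketch of that pattern (the real timing_cmd preserves the timed command's own output via the exec seen in the trace; here it is discarded for brevity, so treat this as an assumption-laden sketch rather than the harness implementation):

    # Time a command with 10ms resolution and echo only the elapsed seconds.
    timing_cmd() {
        local elapsed TIMEFORMAT=%2R
        # The time keyword writes its report to the group's stderr, which the
        # command substitution captures; the command's own output is dropped.
        elapsed=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
        echo "$elapsed"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2

This matches the two summary lines in the log, "remove_attach_helper took 44.65s ..." for the first run and "45.18s" for the second, which bracket the rpc_cmd bdev_nvme_set_hotplug -d / -e toggle of SPDK's own hotplug poller.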
00:11:06.312 [2024-12-13 22:51:44.767659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.312 [2024-12-13 22:51:44.767694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.312 [2024-12-13 22:51:44.767705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.312 [2024-12-13 22:51:44.767722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.312 [2024-12-13 22:51:44.767730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.312 [2024-12-13 22:51:44.767738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.312 [2024-12-13 22:51:44.767745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.312 [2024-12-13 22:51:44.767753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.312 [2024-12-13 22:51:44.767773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.312 [2024-12-13 22:51:44.767782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.312 [2024-12-13 22:51:44.767789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.312 [2024-12-13 22:51:44.767800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.312 [2024-12-13 22:51:45.166711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:06.312 [2024-12-13 22:51:45.167593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.312 [2024-12-13 22:51:45.167620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.312 [2024-12-13 22:51:45.167631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.312 [2024-12-13 22:51:45.167643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.312 [2024-12-13 22:51:45.167651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.313 [2024-12-13 22:51:45.167658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.313 [2024-12-13 22:51:45.167667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.313 [2024-12-13 22:51:45.167673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.313 [2024-12-13 22:51:45.167681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.313 [2024-12-13 22:51:45.167688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.313 [2024-12-13 22:51:45.167695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.313 [2024-12-13 22:51:45.167702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.313 22:51:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.313 22:51:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.313 22:51:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.313 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.570 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.570 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.570 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.570 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:06.570 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:06.570 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.570 22:51:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:18.770 22:51:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.770 22:51:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.770 22:51:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.770 22:51:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.770 22:51:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.770 22:51:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:18.770 22:51:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:18.770 [2024-12-13 22:51:57.666917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
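The bare echo lines in the trace (sw_hotplug.sh@40 and @56-@62) have their redirection targets hidden by xtrace, but they correspond to the standard Linux PCI soft-hotplug sequence: write 1 to the device's remove node, rescan the bus, pin driver_override to uio_pci_generic, and probe the function back in. A sketch under those assumptions; the sysfs paths below are the usual kernel interfaces, not a transcription of sw_hotplug.sh (the trace echoes each BDF twice during re-attach, which this sketch does not reproduce), and all writes require root:

    bdf=0000:00:10.0

    # Surprise-remove the function from the PCI bus (sw_hotplug.sh@40: echo 1).
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"

    # ... wait until no bdev reports the BDF, then bring the device back ...

    # Rediscover the function (sw_hotplug.sh@56: echo 1).
    echo 1 > /sys/bus/pci/rescan

    # Keep it on uio_pci_generic instead of the kernel nvme driver, trigger a
    # probe, then clear the override (sw_hotplug.sh@58-@62).
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"

The sleep 12 that follows in the trace gives the userspace driver twice the hotplug_wait interval to re-enumerate both namespaces before the bdev list is compared against the expected BDFs.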
00:11:18.770 [2024-12-13 22:51:57.667896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.770 [2024-12-13 22:51:57.668000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.771 [2024-12-13 22:51:57.668059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.771 [2024-12-13 22:51:57.668097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.771 [2024-12-13 22:51:57.668232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.771 [2024-12-13 22:51:57.668283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.771 [2024-12-13 22:51:57.668307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.771 [2024-12-13 22:51:57.668324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.771 [2024-12-13 22:51:57.668347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.771 [2024-12-13 22:51:57.668374] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.771 [2024-12-13 22:51:57.668390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.771 [2024-12-13 22:51:57.668454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.028 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:19.028 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.028 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.028 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.028 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.028 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.028 22:51:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.028 22:51:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.028 22:51:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.028 [2024-12-13 22:51:58.166919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:19.286 [2024-12-13 22:51:58.167867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.286 [2024-12-13 22:51:58.167962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.286 [2024-12-13 22:51:58.168024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.286 [2024-12-13 22:51:58.168055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.286 [2024-12-13 22:51:58.168184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.286 [2024-12-13 22:51:58.168238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.286 [2024-12-13 22:51:58.168263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.286 [2024-12-13 22:51:58.168279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.286 [2024-12-13 22:51:58.168303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.286 [2024-12-13 22:51:58.168360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.286 [2024-12-13 22:51:58.168381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.286 [2024-12-13 22:51:58.168404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.286 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:19.286 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:19.545 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:19.545 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.545 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.803 22:51:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.803 22:51:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.803 22:51:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.803 22:51:58 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.803 22:51:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.018 22:52:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.018 22:52:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.018 22:52:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.018 22:52:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.018 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:32.018 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.018 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.018 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.018 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.018 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.018 22:52:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.018 22:52:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.018 22:52:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.018 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:32.019 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:32.019 [2024-12-13 22:52:11.067133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:32.019 [2024-12-13 22:52:11.068273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.019 [2024-12-13 22:52:11.068322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.019 [2024-12-13 22:52:11.068339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.019 [2024-12-13 22:52:11.068363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.019 [2024-12-13 22:52:11.068375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.019 [2024-12-13 22:52:11.068388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.019 [2024-12-13 22:52:11.068401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.019 [2024-12-13 22:52:11.068417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.019 [2024-12-13 22:52:11.068429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.019 [2024-12-13 22:52:11.068444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.019 [2024-12-13 22:52:11.068455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.019 [2024-12-13 22:52:11.068468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.594 [2024-12-13 22:52:11.467126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:32.594 [2024-12-13 22:52:11.468032] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.594 [2024-12-13 22:52:11.468062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.594 [2024-12-13 22:52:11.468074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.594 [2024-12-13 22:52:11.468088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.594 [2024-12-13 22:52:11.468097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.594 [2024-12-13 22:52:11.468104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.594 [2024-12-13 22:52:11.468114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.594 [2024-12-13 22:52:11.468121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.594 [2024-12-13 22:52:11.468129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.594 [2024-12-13 22:52:11.468136] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.594 [2024-12-13 22:52:11.468145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.594 [2024-12-13 22:52:11.468152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.594 22:52:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.594 22:52:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.594 22:52:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:32.594 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:32.852 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.853 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.853 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.853 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:32.853 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:32.853 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.853 22:52:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:11:45.052 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:45.052 22:52:23 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69024 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69024 ']' 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69024 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69024 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.052 22:52:23 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69024' 00:11:45.053 killing process with pid 69024 00:11:45.053 22:52:23 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69024 00:11:45.053 22:52:23 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69024 00:11:45.990 22:52:25 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:46.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.818 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.818 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:46.818 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:47.082 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:47.082 00:11:47.082 real 2m29.107s 00:11:47.082 user 1m50.964s 00:11:47.082 sys 0m16.818s 00:11:47.082 22:52:26 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.082 ************************************ 00:11:47.082 22:52:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.082 END TEST sw_hotplug 00:11:47.082 ************************************ 00:11:47.082 22:52:26 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:47.082 22:52:26 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:47.082 22:52:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.082 22:52:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.082 22:52:26 -- common/autotest_common.sh@10 -- # set +x 00:11:47.082 ************************************ 00:11:47.082 START TEST nvme_xnvme 00:11:47.082 ************************************ 00:11:47.082 22:52:26 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:47.082 * Looking for test storage... 00:11:47.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:47.082 22:52:26 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:47.082 22:52:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:47.082 22:52:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.082 22:52:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.082 22:52:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.346 22:52:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:47.346 22:52:26 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.346 22:52:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.346 --rc genhtml_branch_coverage=1 00:11:47.346 --rc genhtml_function_coverage=1 00:11:47.346 --rc genhtml_legend=1 00:11:47.346 --rc geninfo_all_blocks=1 00:11:47.346 --rc geninfo_unexecuted_blocks=1 00:11:47.346 00:11:47.346 ' 00:11:47.346 22:52:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.346 --rc genhtml_branch_coverage=1 00:11:47.346 --rc genhtml_function_coverage=1 00:11:47.346 --rc genhtml_legend=1 00:11:47.346 --rc geninfo_all_blocks=1 00:11:47.346 --rc geninfo_unexecuted_blocks=1 00:11:47.346 00:11:47.346 ' 00:11:47.346 22:52:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.346 --rc genhtml_branch_coverage=1 00:11:47.346 --rc genhtml_function_coverage=1 00:11:47.346 --rc genhtml_legend=1 00:11:47.346 --rc geninfo_all_blocks=1 00:11:47.346 --rc geninfo_unexecuted_blocks=1 00:11:47.346 00:11:47.346 ' 00:11:47.346 22:52:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.346 --rc genhtml_branch_coverage=1 00:11:47.346 --rc genhtml_function_coverage=1 00:11:47.346 --rc genhtml_legend=1 00:11:47.347 --rc geninfo_all_blocks=1 00:11:47.347 --rc geninfo_unexecuted_blocks=1 00:11:47.347 00:11:47.347 ' 00:11:47.347 22:52:26 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:47.347 22:52:26 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:47.347 22:52:26 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:47.347 22:52:26 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:47.347 22:52:26 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:47.347 22:52:26 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:47.347 22:52:26 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:47.347 22:52:26 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:47.347 22:52:26 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:47.347 22:52:26 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:47.347 22:52:26 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:47.347 22:52:26 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:47.347 22:52:26 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:47.348 #define SPDK_CONFIG_H 00:11:47.348 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:47.348 #define SPDK_CONFIG_APPS 1 00:11:47.348 #define SPDK_CONFIG_ARCH native 00:11:47.348 #define SPDK_CONFIG_ASAN 1 00:11:47.348 #undef SPDK_CONFIG_AVAHI 00:11:47.348 #undef SPDK_CONFIG_CET 00:11:47.348 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:47.348 #define SPDK_CONFIG_COVERAGE 1 00:11:47.348 #define SPDK_CONFIG_CROSS_PREFIX 00:11:47.348 #undef SPDK_CONFIG_CRYPTO 00:11:47.348 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:47.348 #undef SPDK_CONFIG_CUSTOMOCF 00:11:47.348 #undef SPDK_CONFIG_DAOS 00:11:47.348 #define SPDK_CONFIG_DAOS_DIR 00:11:47.348 #define SPDK_CONFIG_DEBUG 1 00:11:47.348 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:47.348 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:47.348 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:47.348 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:47.348 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:47.348 #undef SPDK_CONFIG_DPDK_UADK 00:11:47.348 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:47.348 #define SPDK_CONFIG_EXAMPLES 1 00:11:47.348 #undef SPDK_CONFIG_FC 00:11:47.348 #define SPDK_CONFIG_FC_PATH 00:11:47.348 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:47.348 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:47.348 #define SPDK_CONFIG_FSDEV 1 00:11:47.348 #undef SPDK_CONFIG_FUSE 00:11:47.348 #undef SPDK_CONFIG_FUZZER 00:11:47.348 #define SPDK_CONFIG_FUZZER_LIB 00:11:47.348 #undef SPDK_CONFIG_GOLANG 00:11:47.348 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:47.348 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:47.348 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:47.348 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:47.348 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:47.348 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:47.348 #undef SPDK_CONFIG_HAVE_LZ4 00:11:47.348 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:47.348 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:47.348 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:47.348 #define SPDK_CONFIG_IDXD 1 00:11:47.348 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:47.348 #undef SPDK_CONFIG_IPSEC_MB 00:11:47.348 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:47.348 #define SPDK_CONFIG_ISAL 1 00:11:47.348 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:47.348 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:47.348 #define SPDK_CONFIG_LIBDIR 00:11:47.348 #undef SPDK_CONFIG_LTO 00:11:47.348 #define SPDK_CONFIG_MAX_LCORES 128 00:11:47.348 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:47.348 #define SPDK_CONFIG_NVME_CUSE 1 00:11:47.348 #undef SPDK_CONFIG_OCF 00:11:47.348 #define SPDK_CONFIG_OCF_PATH 00:11:47.348 #define SPDK_CONFIG_OPENSSL_PATH 00:11:47.348 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:47.348 #define SPDK_CONFIG_PGO_DIR 00:11:47.348 #undef SPDK_CONFIG_PGO_USE 00:11:47.348 #define SPDK_CONFIG_PREFIX /usr/local 00:11:47.348 #undef SPDK_CONFIG_RAID5F 00:11:47.348 #undef SPDK_CONFIG_RBD 00:11:47.348 #define SPDK_CONFIG_RDMA 1 00:11:47.348 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:47.348 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:47.348 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:47.348 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:47.348 #define SPDK_CONFIG_SHARED 1 00:11:47.348 #undef SPDK_CONFIG_SMA 00:11:47.348 #define SPDK_CONFIG_TESTS 1 00:11:47.348 #undef SPDK_CONFIG_TSAN 00:11:47.348 #define SPDK_CONFIG_UBLK 1 00:11:47.348 #define SPDK_CONFIG_UBSAN 1 00:11:47.348 #undef SPDK_CONFIG_UNIT_TESTS 00:11:47.348 #undef SPDK_CONFIG_URING 00:11:47.348 #define SPDK_CONFIG_URING_PATH 00:11:47.348 #undef SPDK_CONFIG_URING_ZNS 00:11:47.348 #undef SPDK_CONFIG_USDT 00:11:47.348 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:47.348 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:47.348 #undef SPDK_CONFIG_VFIO_USER 00:11:47.348 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:47.348 #define SPDK_CONFIG_VHOST 1 00:11:47.348 #define SPDK_CONFIG_VIRTIO 1 00:11:47.348 #undef SPDK_CONFIG_VTUNE 00:11:47.348 #define SPDK_CONFIG_VTUNE_DIR 00:11:47.348 #define SPDK_CONFIG_WERROR 1 00:11:47.348 #define SPDK_CONFIG_WPDK_DIR 00:11:47.348 #define SPDK_CONFIG_XNVME 1 00:11:47.348 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:47.348 22:52:26 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:47.348 22:52:26 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.348 22:52:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.348 22:52:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.348 22:52:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.348 22:52:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.348 22:52:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.348 22:52:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.348 22:52:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.348 22:52:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:47.348 22:52:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.348 22:52:26 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:47.348 
22:52:26 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:47.348 22:52:26 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:47.349 22:52:26 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:47.349 22:52:26 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:47.350 22:52:26 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:47.350 22:52:26 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
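Up to this point the trace shows autotest_common.sh arming the sanitizer runtimes before any test binary is launched: ASAN and UBSAN options are exported, a leak-suppression file is regenerated, and LSAN is pointed at it. A minimal stand-alone sketch of the same setup, with the option strings and the libfuse3 suppression copied from the trace (everything else is illustrative):

  # Sanitizer runtime configuration used for the functional tests.
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

  # Known-benign leaks (here: libfuse3) are silenced through an LSAN suppression file.
  supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"
  echo "leak:libfuse3.so" >> "$supp"
  export LSAN_OPTIONS=suppressions=$supp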
00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70372 ]] 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70372 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:47.350 22:52:26 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.YWnCJF 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.YWnCJF/tests/xnvme /tmp/spdk.YWnCJF 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:47.351 22:52:26 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971664896 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596581888 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971664896 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596581888 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98047959040 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1654820864 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:47.351 * Looking for test storage... 
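The set_test_storage step that begins here decides where the xnvme tests may keep scratch data: the df -T output above is folded into per-mount arrays and, in the lines that follow, the first candidate directory whose filesystem offers at least the requested space (about 2.2 GB in this run) is exported as SPDK_TEST_STORAGE. A condensed sketch of that selection logic, written independently of the real helper and assuming the candidate paths are already set by the caller:

  # Fold `df -T` into bytes available per mount point.
  declare -A avails
  while read -r source fs size used avail _ mount; do
      avails["$mount"]=$((avail * 1024))      # df reports 1K blocks
  done < <(df -T | grep -v Filesystem)

  requested_size=2214592512                   # 2 GiB of scratch plus margin, as in the trace
  for dir in "$testdir" "$storage_fallback"; do
      mount=$(df "$dir" | awk '$1 !~ /Filesystem/{print $6}')
      if [[ ${avails[$mount]:-0} -ge $requested_size ]]; then
          export SPDK_TEST_STORAGE=$dir       # first mount with enough room wins
          break
      fi
  done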
00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13971664896 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:47.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:47.351 22:52:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:47.351 22:52:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.352 22:52:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.352 22:52:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.352 22:52:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:47.352 22:52:26 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.352 22:52:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.352 --rc genhtml_branch_coverage=1 00:11:47.352 --rc genhtml_function_coverage=1 00:11:47.352 --rc genhtml_legend=1 00:11:47.352 --rc geninfo_all_blocks=1 00:11:47.352 --rc geninfo_unexecuted_blocks=1 00:11:47.352 00:11:47.352 ' 00:11:47.352 22:52:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.352 --rc genhtml_branch_coverage=1 00:11:47.352 --rc genhtml_function_coverage=1 00:11:47.352 --rc genhtml_legend=1 00:11:47.352 --rc geninfo_all_blocks=1 
00:11:47.352 --rc geninfo_unexecuted_blocks=1 00:11:47.352 00:11:47.352 ' 00:11:47.352 22:52:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.352 --rc genhtml_branch_coverage=1 00:11:47.352 --rc genhtml_function_coverage=1 00:11:47.352 --rc genhtml_legend=1 00:11:47.352 --rc geninfo_all_blocks=1 00:11:47.352 --rc geninfo_unexecuted_blocks=1 00:11:47.352 00:11:47.352 ' 00:11:47.352 22:52:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.352 --rc genhtml_branch_coverage=1 00:11:47.352 --rc genhtml_function_coverage=1 00:11:47.352 --rc genhtml_legend=1 00:11:47.352 --rc geninfo_all_blocks=1 00:11:47.352 --rc geninfo_unexecuted_blocks=1 00:11:47.352 00:11:47.352 ' 00:11:47.352 22:52:26 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.352 22:52:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.352 22:52:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.352 22:52:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.352 22:52:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.352 22:52:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.352 22:52:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.352 22:52:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.352 22:52:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:47.352 22:52:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.352 22:52:26 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:47.352 22:52:26 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:47.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:47.904 Waiting for block devices as requested 00:11:47.904 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.904 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:48.165 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:48.165 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:53.450 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:53.450 22:52:32 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:53.710 22:52:32 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:53.710 22:52:32 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:53.972 22:52:32 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:53.972 22:52:32 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:53.972 No valid GPT data, bailing 00:11:53.972 22:52:32 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:53.972 22:52:32 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:11:53.972 22:52:32 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:53.972 22:52:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:53.972 22:52:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:53.972 22:52:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.972 22:52:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:53.972 ************************************ 00:11:53.972 START TEST xnvme_rpc 00:11:53.972 ************************************ 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:53.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70769 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70769 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70769 ']' 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:53.972 22:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.972 [2024-12-13 22:52:33.048573] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
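The xnvme_rpc test that starts here drives the bdev_xnvme RPCs end to end: a bare spdk_tgt is brought up, an xnvme bdev is created over /dev/nvme0n1 with the libaio io_mechanism, its parameters are read back through framework_get_config, and the bdev is deleted before the target is killed. Replayed by hand with SPDK's rpc.py client, with paths and arguments mirroring the trace (the sleep is a crude stand-in for the waitforlisten helper), the flow looks roughly like:

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" &                 # start a bare SPDK target
  tgt_pid=$!
  sleep 2                                      # wait for /var/tmp/spdk.sock to appear

  "$spdk/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
  "$spdk/scripts/rpc.py" framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # prints: libaio
  "$spdk/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev

  kill "$tgt_pid"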
00:11:53.972 [2024-12-13 22:52:33.049405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70769 ] 00:11:54.234 [2024-12-13 22:52:33.211481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.234 [2024-12-13 22:52:33.331623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.179 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.179 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:55.179 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:55.179 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.179 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.179 xnvme_bdev 00:11:55.179 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.179 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70769 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70769 ']' 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70769 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70769 00:11:55.180 killing process with pid 70769 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70769' 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70769 00:11:55.180 22:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70769 00:11:57.094 00:11:57.094 real 0m2.925s 00:11:57.094 user 0m2.923s 00:11:57.094 sys 0m0.460s 00:11:57.094 ************************************ 00:11:57.094 END TEST xnvme_rpc 00:11:57.094 ************************************ 00:11:57.094 22:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.094 22:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.094 22:52:35 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:57.094 22:52:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:57.094 22:52:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.094 22:52:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.094 ************************************ 00:11:57.094 START TEST xnvme_bdevperf 00:11:57.094 ************************************ 00:11:57.094 22:52:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:11:57.094 22:52:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:11:57.094 22:52:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:11:57.094 22:52:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:11:57.094 22:52:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:11:57.094 22:52:35 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:11:57.094 22:52:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:57.094 22:52:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:57.094 { 00:11:57.094 "subsystems": [ 00:11:57.094 { 00:11:57.094 "subsystem": "bdev", 00:11:57.094 "config": [ 00:11:57.094 { 00:11:57.094 "params": { 00:11:57.094 "io_mechanism": "libaio", 00:11:57.094 "conserve_cpu": false, 00:11:57.094 "filename": "/dev/nvme0n1", 00:11:57.094 "name": "xnvme_bdev" 00:11:57.094 }, 00:11:57.094 "method": "bdev_xnvme_create" 00:11:57.094 }, 00:11:57.094 { 00:11:57.094 "method": "bdev_wait_for_examine" 00:11:57.094 } 00:11:57.094 ] 00:11:57.094 } 00:11:57.094 ] 00:11:57.094 } 00:11:57.094 [2024-12-13 22:52:36.022256] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:57.094 [2024-12-13 22:52:36.022577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70838 ] 00:11:57.094 [2024-12-13 22:52:36.189585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.355 [2024-12-13 22:52:36.310373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.615 Running I/O for 5 seconds... 00:11:59.498 25116.00 IOPS, 98.11 MiB/s [2024-12-13T22:52:40.047Z] 25548.00 IOPS, 99.80 MiB/s [2024-12-13T22:52:40.990Z] 25321.67 IOPS, 98.91 MiB/s [2024-12-13T22:52:41.934Z] 24948.75 IOPS, 97.46 MiB/s [2024-12-13T22:52:41.934Z] 24992.80 IOPS, 97.63 MiB/s 00:12:02.794 Latency(us) 00:12:02.794 [2024-12-13T22:52:41.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.794 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:02.794 xnvme_bdev : 5.01 24979.58 97.58 0.00 0.00 2556.70 456.86 7057.72 00:12:02.794 [2024-12-13T22:52:41.934Z] =================================================================================================================== 00:12:02.794 [2024-12-13T22:52:41.934Z] Total : 24979.58 97.58 0.00 0.00 2556.70 456.86 7057.72 00:12:03.367 22:52:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:03.367 22:52:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:03.367 22:52:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:03.367 22:52:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:03.367 22:52:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:03.367 { 00:12:03.367 "subsystems": [ 00:12:03.367 { 00:12:03.367 "subsystem": "bdev", 00:12:03.367 "config": [ 00:12:03.367 { 00:12:03.367 "params": { 00:12:03.367 "io_mechanism": "libaio", 00:12:03.367 "conserve_cpu": false, 00:12:03.367 "filename": "/dev/nvme0n1", 00:12:03.367 "name": "xnvme_bdev" 00:12:03.367 }, 00:12:03.367 "method": "bdev_xnvme_create" 00:12:03.367 }, 00:12:03.367 { 00:12:03.367 "method": "bdev_wait_for_examine" 00:12:03.367 } 00:12:03.367 ] 00:12:03.367 } 00:12:03.367 ] 00:12:03.367 } 00:12:03.627 [2024-12-13 22:52:42.507034] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
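The JSON block printed just above is what gen_conf pipes to bdevperf over /dev/fd/62. The same randread run can be reproduced with the config written to an ordinary file instead; a sketch, assuming the same build tree as this log:

# Hypothetical standalone invocation mirroring the bdevperf randread run above.
SPDK=/home/vagrant/spdk_repo/spdk
cat > /tmp/xnvme_libaio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Same knobs as the log: queue depth 64, 4 KiB blocks, randread for 5 seconds.
$SPDK/build/examples/bdevperf --json /tmp/xnvme_libaio.json \
    -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev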
00:12:03.627 [2024-12-13 22:52:42.507178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70914 ] 00:12:03.627 [2024-12-13 22:52:42.666020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.888 [2024-12-13 22:52:42.788311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.149 Running I/O for 5 seconds... 00:12:06.035 34836.00 IOPS, 136.08 MiB/s [2024-12-13T22:52:46.115Z] 35545.00 IOPS, 138.85 MiB/s [2024-12-13T22:52:47.502Z] 35237.67 IOPS, 137.65 MiB/s [2024-12-13T22:52:48.446Z] 35535.50 IOPS, 138.81 MiB/s [2024-12-13T22:52:48.446Z] 35313.80 IOPS, 137.94 MiB/s 00:12:09.306 Latency(us) 00:12:09.306 [2024-12-13T22:52:48.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.306 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:09.306 xnvme_bdev : 5.01 35285.43 137.83 0.00 0.00 1809.42 146.51 10989.88 00:12:09.306 [2024-12-13T22:52:48.446Z] =================================================================================================================== 00:12:09.306 [2024-12-13T22:52:48.446Z] Total : 35285.43 137.83 0.00 0.00 1809.42 146.51 10989.88 00:12:09.880 00:12:09.880 real 0m12.964s 00:12:09.880 user 0m4.998s 00:12:09.880 sys 0m6.362s 00:12:09.880 ************************************ 00:12:09.880 END TEST xnvme_bdevperf 00:12:09.880 ************************************ 00:12:09.880 22:52:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.880 22:52:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:09.880 22:52:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:09.880 22:52:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.880 22:52:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.880 22:52:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:09.880 ************************************ 00:12:09.880 START TEST xnvme_fio_plugin 00:12:09.880 ************************************ 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:09.880 22:52:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:09.880 22:52:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:09.880 22:52:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:09.880 22:52:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:09.880 22:52:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:09.880 22:52:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:09.880 { 00:12:09.880 "subsystems": [ 00:12:09.880 { 00:12:09.880 "subsystem": "bdev", 00:12:09.880 "config": [ 00:12:09.880 { 00:12:09.880 "params": { 00:12:09.880 "io_mechanism": "libaio", 00:12:09.880 "conserve_cpu": false, 00:12:09.880 "filename": "/dev/nvme0n1", 00:12:09.880 "name": "xnvme_bdev" 00:12:09.880 }, 00:12:09.880 "method": "bdev_xnvme_create" 00:12:09.880 }, 00:12:09.880 { 00:12:09.880 "method": "bdev_wait_for_examine" 00:12:09.880 } 00:12:09.880 ] 00:12:09.880 } 00:12:09.880 ] 00:12:09.880 } 00:12:10.142 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:10.142 fio-3.35 00:12:10.142 Starting 1 thread 00:12:16.732 00:12:16.733 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71033: Fri Dec 13 22:52:54 2024 00:12:16.733 read: IOPS=34.9k, BW=136MiB/s (143MB/s)(683MiB/5002msec) 00:12:16.733 slat (usec): min=4, max=2282, avg=20.53, stdev=86.21 00:12:16.733 clat (usec): min=106, max=5711, avg=1269.47, stdev=546.33 00:12:16.733 lat (usec): min=187, max=5717, avg=1290.00, stdev=540.37 00:12:16.733 clat percentiles (usec): 00:12:16.733 | 1.00th=[ 265], 5.00th=[ 445], 10.00th=[ 603], 20.00th=[ 807], 00:12:16.733 | 30.00th=[ 963], 40.00th=[ 1106], 50.00th=[ 1237], 60.00th=[ 1369], 00:12:16.733 | 70.00th=[ 1516], 80.00th=[ 1680], 90.00th=[ 1942], 95.00th=[ 2180], 00:12:16.733 | 99.00th=[ 2900], 99.50th=[ 3261], 99.90th=[ 3884], 99.95th=[ 4178], 00:12:16.733 | 99.99th=[ 4686] 00:12:16.733 bw ( KiB/s): 
min=126200, max=150736, per=100.00%, avg=139768.00, stdev=6843.14, samples=9 00:12:16.733 iops : min=31550, max=37684, avg=34942.00, stdev=1710.78, samples=9 00:12:16.733 lat (usec) : 250=0.82%, 500=5.68%, 750=10.11%, 1000=16.10% 00:12:16.733 lat (msec) : 2=59.02%, 4=8.19%, 10=0.08% 00:12:16.733 cpu : usr=42.51%, sys=48.05%, ctx=18, majf=0, minf=764 00:12:16.733 IO depths : 1=0.4%, 2=1.2%, 4=3.1%, 8=8.6%, 16=23.7%, 32=60.8%, >=64=2.1% 00:12:16.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.733 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:16.733 issued rwts: total=174770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.733 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:16.733 00:12:16.733 Run status group 0 (all jobs): 00:12:16.733 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=683MiB (716MB), run=5002-5002msec 00:12:16.733 ----------------------------------------------------- 00:12:16.733 Suppressions used: 00:12:16.733 count bytes template 00:12:16.733 1 11 /usr/src/fio/parse.c 00:12:16.733 1 8 libtcmalloc_minimal.so 00:12:16.733 1 904 libcrypto.so 00:12:16.733 ----------------------------------------------------- 00:12:16.733 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:16.733 
22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:16.733 22:52:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:16.733 { 00:12:16.733 "subsystems": [ 00:12:16.733 { 00:12:16.733 "subsystem": "bdev", 00:12:16.733 "config": [ 00:12:16.733 { 00:12:16.733 "params": { 00:12:16.733 "io_mechanism": "libaio", 00:12:16.733 "conserve_cpu": false, 00:12:16.733 "filename": "/dev/nvme0n1", 00:12:16.733 "name": "xnvme_bdev" 00:12:16.733 }, 00:12:16.733 "method": "bdev_xnvme_create" 00:12:16.733 }, 00:12:16.733 { 00:12:16.733 "method": "bdev_wait_for_examine" 00:12:16.733 } 00:12:16.733 ] 00:12:16.733 } 00:12:16.733 ] 00:12:16.733 } 00:12:16.994 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:16.994 fio-3.35 00:12:16.994 Starting 1 thread 00:12:23.579 00:12:23.579 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71125: Fri Dec 13 22:53:01 2024 00:12:23.579 write: IOPS=39.4k, BW=154MiB/s (161MB/s)(769MiB/5001msec); 0 zone resets 00:12:23.579 slat (usec): min=4, max=2027, avg=20.92, stdev=63.11 00:12:23.579 clat (usec): min=64, max=9821, avg=1044.54, stdev=568.91 00:12:23.579 lat (usec): min=89, max=9826, avg=1065.46, stdev=567.29 00:12:23.579 clat percentiles (usec): 00:12:23.579 | 1.00th=[ 215], 5.00th=[ 322], 10.00th=[ 424], 20.00th=[ 586], 00:12:23.579 | 30.00th=[ 725], 40.00th=[ 848], 50.00th=[ 963], 60.00th=[ 1090], 00:12:23.579 | 70.00th=[ 1221], 80.00th=[ 1418], 90.00th=[ 1713], 95.00th=[ 2057], 00:12:23.579 | 99.00th=[ 2835], 99.50th=[ 3228], 99.90th=[ 5080], 99.95th=[ 6325], 00:12:23.579 | 99.99th=[ 8455] 00:12:23.579 bw ( KiB/s): min=138672, max=169672, per=100.00%, avg=157664.33, stdev=9878.51, samples=9 00:12:23.579 iops : min=34668, max=42418, avg=39416.00, stdev=2469.62, samples=9 00:12:23.579 lat (usec) : 100=0.01%, 250=2.03%, 500=12.23%, 750=17.76%, 1000=21.06% 00:12:23.579 lat (msec) : 2=41.29%, 4=5.47%, 10=0.16% 00:12:23.579 cpu : usr=32.64%, sys=53.88%, ctx=243, majf=0, minf=765 00:12:23.579 IO depths : 1=0.2%, 2=0.7%, 4=2.8%, 8=9.2%, 16=25.1%, 32=60.0%, >=64=2.0% 00:12:23.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.579 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:23.579 issued rwts: total=0,196876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.579 00:12:23.579 Run status group 0 (all jobs): 00:12:23.579 WRITE: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=769MiB (806MB), run=5001-5001msec 00:12:23.579 ----------------------------------------------------- 00:12:23.579 Suppressions used: 00:12:23.579 count bytes template 00:12:23.579 1 11 /usr/src/fio/parse.c 00:12:23.579 1 8 libtcmalloc_minimal.so 00:12:23.579 1 904 libcrypto.so 00:12:23.579 ----------------------------------------------------- 00:12:23.579 00:12:23.579 ************************************ 00:12:23.579 END TEST xnvme_fio_plugin 00:12:23.579 
************************************ 00:12:23.579 00:12:23.579 real 0m13.589s 00:12:23.579 user 0m6.393s 00:12:23.579 sys 0m5.671s 00:12:23.579 22:53:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.579 22:53:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:23.579 22:53:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:23.579 22:53:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:23.579 22:53:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:23.579 22:53:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:23.579 22:53:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.579 22:53:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.579 22:53:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.579 ************************************ 00:12:23.579 START TEST xnvme_rpc 00:12:23.579 ************************************ 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71211 00:12:23.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71211 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71211 ']' 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:23.579 22:53:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.839 [2024-12-13 22:53:02.734411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
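The xnvme_fio_plugin test that just finished drives the same bdev through SPDK's fio plugin (ioengine=spdk_bdev) rather than bdevperf, preloading libasan only because this build is ASAN-instrumented. A hand-run equivalent of the randread job, assuming fio lives at /usr/src/fio as in the log and reusing the hypothetical JSON file from the earlier bdevperf sketch:

# Hypothetical manual run of the randread fio job shown above.
SPDK=/home/vagrant/spdk_repo/spdk
LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_bdev" \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_libaio.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev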
00:12:23.839 [2024-12-13 22:53:02.734793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71211 ] 00:12:23.839 [2024-12-13 22:53:02.910949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.100 [2024-12-13 22:53:03.011969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.670 xnvme_bdev 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.670 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.931 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71211 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71211 ']' 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71211 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71211 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.932 killing process with pid 71211 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71211' 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71211 00:12:24.932 22:53:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71211 00:12:26.846 ************************************ 00:12:26.846 END TEST xnvme_rpc 00:12:26.846 ************************************ 00:12:26.846 00:12:26.846 real 0m2.829s 00:12:26.846 user 0m2.945s 00:12:26.846 sys 0m0.455s 00:12:26.846 22:53:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.846 22:53:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.846 22:53:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:26.846 22:53:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.846 22:53:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.846 22:53:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.846 ************************************ 00:12:26.846 START TEST xnvme_bdevperf 00:12:26.846 ************************************ 00:12:26.846 22:53:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:26.847 22:53:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:26.847 22:53:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:26.847 22:53:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:26.847 22:53:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:26.847 22:53:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:12:26.847 22:53:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:26.847 22:53:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.847 { 00:12:26.847 "subsystems": [ 00:12:26.847 { 00:12:26.847 "subsystem": "bdev", 00:12:26.847 "config": [ 00:12:26.847 { 00:12:26.847 "params": { 00:12:26.847 "io_mechanism": "libaio", 00:12:26.847 "conserve_cpu": true, 00:12:26.847 "filename": "/dev/nvme0n1", 00:12:26.847 "name": "xnvme_bdev" 00:12:26.847 }, 00:12:26.847 "method": "bdev_xnvme_create" 00:12:26.847 }, 00:12:26.847 { 00:12:26.847 "method": "bdev_wait_for_examine" 00:12:26.847 } 00:12:26.847 ] 00:12:26.847 } 00:12:26.847 ] 00:12:26.847 } 00:12:26.847 [2024-12-13 22:53:05.588024] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:26.847 [2024-12-13 22:53:05.588169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71284 ] 00:12:26.847 [2024-12-13 22:53:05.753275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.847 [2024-12-13 22:53:05.846434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.108 Running I/O for 5 seconds... 00:12:28.993 36705.00 IOPS, 143.38 MiB/s [2024-12-13T22:53:09.132Z] 37683.00 IOPS, 147.20 MiB/s [2024-12-13T22:53:10.518Z] 37279.00 IOPS, 145.62 MiB/s [2024-12-13T22:53:11.462Z] 36625.25 IOPS, 143.07 MiB/s [2024-12-13T22:53:11.462Z] 36014.80 IOPS, 140.68 MiB/s 00:12:32.322 Latency(us) 00:12:32.322 [2024-12-13T22:53:11.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.322 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:32.322 xnvme_bdev : 5.01 35986.07 140.57 0.00 0.00 1774.10 425.35 9376.69 00:12:32.322 [2024-12-13T22:53:11.462Z] =================================================================================================================== 00:12:32.322 [2024-12-13T22:53:11.462Z] Total : 35986.07 140.57 0.00 0.00 1774.10 425.35 9376.69 00:12:32.894 22:53:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:32.894 22:53:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:32.894 22:53:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:32.894 22:53:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:32.894 22:53:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:32.894 { 00:12:32.894 "subsystems": [ 00:12:32.894 { 00:12:32.894 "subsystem": "bdev", 00:12:32.894 "config": [ 00:12:32.894 { 00:12:32.894 "params": { 00:12:32.894 "io_mechanism": "libaio", 00:12:32.894 "conserve_cpu": true, 00:12:32.894 "filename": "/dev/nvme0n1", 00:12:32.894 "name": "xnvme_bdev" 00:12:32.894 }, 00:12:32.894 "method": "bdev_xnvme_create" 00:12:32.894 }, 00:12:32.894 { 00:12:32.894 "method": "bdev_wait_for_examine" 00:12:32.894 } 00:12:32.894 ] 00:12:32.894 } 00:12:32.894 ] 00:12:32.894 } 00:12:32.894 [2024-12-13 22:53:12.001409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
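This second round repeats xnvme_rpc, xnvme_bdevperf, and xnvme_fio_plugin with conserve_cpu switched to true, which is why the JSON above now carries "conserve_cpu": true. On the RPC side the same switch is the -c flag, as seen in the create call earlier in this round; for reference:

# Same create call as the first round, but with CPU-conserving polling enabled.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c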
00:12:32.894 [2024-12-13 22:53:12.001553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71355 ] 00:12:33.155 [2024-12-13 22:53:12.166194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.155 [2024-12-13 22:53:12.288501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.724 Running I/O for 5 seconds... 00:12:35.610 28961.00 IOPS, 113.13 MiB/s [2024-12-13T22:53:15.692Z] 32477.50 IOPS, 126.87 MiB/s [2024-12-13T22:53:16.635Z] 33446.00 IOPS, 130.65 MiB/s [2024-12-13T22:53:18.019Z] 34139.00 IOPS, 133.36 MiB/s [2024-12-13T22:53:18.019Z] 34528.60 IOPS, 134.88 MiB/s 00:12:38.879 Latency(us) 00:12:38.879 [2024-12-13T22:53:18.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.879 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:38.879 xnvme_bdev : 5.00 34512.71 134.82 0.00 0.00 1850.04 100.82 166965.56 00:12:38.879 [2024-12-13T22:53:18.019Z] =================================================================================================================== 00:12:38.879 [2024-12-13T22:53:18.019Z] Total : 34512.71 134.82 0.00 0.00 1850.04 100.82 166965.56 00:12:39.450 00:12:39.450 real 0m12.884s 00:12:39.450 user 0m5.162s 00:12:39.450 sys 0m5.904s 00:12:39.450 22:53:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.450 22:53:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:39.450 ************************************ 00:12:39.450 END TEST xnvme_bdevperf 00:12:39.450 ************************************ 00:12:39.450 22:53:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:39.450 22:53:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.450 22:53:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.450 22:53:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.450 ************************************ 00:12:39.450 START TEST xnvme_fio_plugin 00:12:39.450 ************************************ 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.450 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:39.451 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:39.451 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:39.451 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:39.451 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:39.451 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:39.451 22:53:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.451 { 00:12:39.451 "subsystems": [ 00:12:39.451 { 00:12:39.451 "subsystem": "bdev", 00:12:39.451 "config": [ 00:12:39.451 { 00:12:39.451 "params": { 00:12:39.451 "io_mechanism": "libaio", 00:12:39.451 "conserve_cpu": true, 00:12:39.451 "filename": "/dev/nvme0n1", 00:12:39.451 "name": "xnvme_bdev" 00:12:39.451 }, 00:12:39.451 "method": "bdev_xnvme_create" 00:12:39.451 }, 00:12:39.451 { 00:12:39.451 "method": "bdev_wait_for_examine" 00:12:39.451 } 00:12:39.451 ] 00:12:39.451 } 00:12:39.451 ] 00:12:39.451 } 00:12:39.711 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:39.711 fio-3.35 00:12:39.711 Starting 1 thread 00:12:46.338 00:12:46.338 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71474: Fri Dec 13 22:53:24 2024 00:12:46.338 read: IOPS=34.7k, BW=136MiB/s (142MB/s)(679MiB/5002msec) 00:12:46.338 slat (usec): min=4, max=2127, avg=19.69, stdev=87.64 00:12:46.338 clat (usec): min=105, max=6693, avg=1308.03, stdev=515.16 00:12:46.338 lat (usec): min=180, max=6698, avg=1327.71, stdev=507.91 00:12:46.338 clat percentiles (usec): 00:12:46.338 | 1.00th=[ 281], 5.00th=[ 519], 10.00th=[ 676], 20.00th=[ 881], 00:12:46.338 | 30.00th=[ 1037], 40.00th=[ 1172], 50.00th=[ 1287], 60.00th=[ 1401], 00:12:46.338 | 70.00th=[ 1532], 80.00th=[ 1680], 90.00th=[ 1942], 95.00th=[ 2180], 00:12:46.338 | 99.00th=[ 2835], 99.50th=[ 3097], 99.90th=[ 3621], 99.95th=[ 3785], 00:12:46.338 | 99.99th=[ 4178] 00:12:46.338 bw ( KiB/s): 
min=134328, max=144792, per=99.99%, avg=138908.44, stdev=3225.97, samples=9 00:12:46.338 iops : min=33582, max=36198, avg=34727.11, stdev=806.49, samples=9 00:12:46.338 lat (usec) : 250=0.67%, 500=3.95%, 750=8.49%, 1000=14.27% 00:12:46.338 lat (msec) : 2=64.35%, 4=8.27%, 10=0.02% 00:12:46.338 cpu : usr=46.19%, sys=45.37%, ctx=9, majf=0, minf=764 00:12:46.338 IO depths : 1=0.6%, 2=1.4%, 4=3.4%, 8=9.1%, 16=23.4%, 32=60.0%, >=64=2.0% 00:12:46.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.338 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:46.338 issued rwts: total=173720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:46.338 00:12:46.338 Run status group 0 (all jobs): 00:12:46.338 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=679MiB (712MB), run=5002-5002msec 00:12:46.338 ----------------------------------------------------- 00:12:46.338 Suppressions used: 00:12:46.338 count bytes template 00:12:46.338 1 11 /usr/src/fio/parse.c 00:12:46.338 1 8 libtcmalloc_minimal.so 00:12:46.338 1 904 libcrypto.so 00:12:46.338 ----------------------------------------------------- 00:12:46.338 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:46.338 
22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:46.338 22:53:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.338 { 00:12:46.338 "subsystems": [ 00:12:46.338 { 00:12:46.338 "subsystem": "bdev", 00:12:46.338 "config": [ 00:12:46.338 { 00:12:46.338 "params": { 00:12:46.338 "io_mechanism": "libaio", 00:12:46.338 "conserve_cpu": true, 00:12:46.338 "filename": "/dev/nvme0n1", 00:12:46.338 "name": "xnvme_bdev" 00:12:46.338 }, 00:12:46.338 "method": "bdev_xnvme_create" 00:12:46.338 }, 00:12:46.338 { 00:12:46.338 "method": "bdev_wait_for_examine" 00:12:46.338 } 00:12:46.338 ] 00:12:46.338 } 00:12:46.338 ] 00:12:46.338 } 00:12:46.599 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:46.599 fio-3.35 00:12:46.599 Starting 1 thread 00:12:53.188 00:12:53.188 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71566: Fri Dec 13 22:53:31 2024 00:12:53.188 write: IOPS=37.6k, BW=147MiB/s (154MB/s)(735MiB/5001msec); 0 zone resets 00:12:53.188 slat (usec): min=4, max=1781, avg=20.16, stdev=73.08 00:12:53.188 clat (usec): min=106, max=6465, avg=1141.38, stdev=557.18 00:12:53.188 lat (usec): min=167, max=6478, avg=1161.53, stdev=554.46 00:12:53.188 clat percentiles (usec): 00:12:53.188 | 1.00th=[ 227], 5.00th=[ 355], 10.00th=[ 486], 20.00th=[ 676], 00:12:53.188 | 30.00th=[ 824], 40.00th=[ 963], 50.00th=[ 1090], 60.00th=[ 1205], 00:12:53.188 | 70.00th=[ 1352], 80.00th=[ 1516], 90.00th=[ 1827], 95.00th=[ 2147], 00:12:53.188 | 99.00th=[ 2933], 99.50th=[ 3261], 99.90th=[ 3916], 99.95th=[ 4228], 00:12:53.188 | 99.99th=[ 4817] 00:12:53.188 bw ( KiB/s): min=140064, max=156624, per=98.24%, avg=147777.78, stdev=6255.32, samples=9 00:12:53.188 iops : min=35016, max=39156, avg=36944.44, stdev=1563.83, samples=9 00:12:53.188 lat (usec) : 250=1.54%, 500=9.14%, 750=14.16%, 1000=17.91% 00:12:53.188 lat (msec) : 2=50.47%, 4=6.70%, 10=0.07% 00:12:53.188 cpu : usr=38.98%, sys=50.64%, ctx=124, majf=0, minf=765 00:12:53.188 IO depths : 1=0.4%, 2=1.1%, 4=3.5%, 8=9.6%, 16=24.4%, 32=59.1%, >=64=2.0% 00:12:53.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.188 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:53.188 issued rwts: total=0,188061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.188 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:53.188 00:12:53.188 Run status group 0 (all jobs): 00:12:53.188 WRITE: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=735MiB (770MB), run=5001-5001msec 00:12:53.188 ----------------------------------------------------- 00:12:53.188 Suppressions used: 00:12:53.188 count bytes template 00:12:53.188 1 11 /usr/src/fio/parse.c 00:12:53.188 1 8 libtcmalloc_minimal.so 00:12:53.188 1 904 libcrypto.so 00:12:53.188 ----------------------------------------------------- 00:12:53.188 00:12:53.188 00:12:53.188 real 0m13.794s 00:12:53.188 user 0m7.029s 00:12:53.188 sys 0m5.430s 00:12:53.188 
22:53:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.188 ************************************ 00:12:53.188 END TEST xnvme_fio_plugin 00:12:53.188 ************************************ 00:12:53.188 22:53:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:53.188 22:53:32 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:53.188 22:53:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:53.188 22:53:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:53.188 22:53:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:53.188 22:53:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:53.188 22:53:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:53.188 22:53:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:53.188 22:53:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:53.188 22:53:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:53.188 22:53:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.188 22:53:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.188 22:53:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.188 ************************************ 00:12:53.188 START TEST xnvme_rpc 00:12:53.188 ************************************ 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:53.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71652 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71652 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71652 ']' 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.188 22:53:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:53.450 [2024-12-13 22:53:32.397404] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
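From this point the suite moves on to the io_uring mechanism, and later to io_uring_cmd, which targets the character device /dev/ng0n1 instead of the block device, per the xnvme_filename map declared at the top of this section. A condensed outline of the outer loop the log is walking through, using the same arrays it declares (a sketch, not the literal xnvme.sh source):

# Condensed outline of the iteration visible in this log.
xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
declare -A xnvme_filename=(
    ['libaio']='/dev/nvme0n1'
    ['io_uring']='/dev/nvme0n1'
    ['io_uring_cmd']='/dev/ng0n1'
)
for io in "${xnvme_io[@]}"; do
    for cc in false true; do
        echo "round: io_mechanism=$io filename=${xnvme_filename[$io]} conserve_cpu=$cc"
        # ...run_test xnvme_rpc / xnvme_bdevperf / xnvme_fio_plugin with these values...
    done
done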
00:12:53.450 [2024-12-13 22:53:32.397519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71652 ] 00:12:53.450 [2024-12-13 22:53:32.558274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.710 [2024-12-13 22:53:32.652793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.281 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.281 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:54.281 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:54.281 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.281 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.281 xnvme_bdev 00:12:54.281 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.281 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:54.281 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:54.282 22:53:33 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71652 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71652 ']' 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71652 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.282 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71652 00:12:54.542 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.542 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.542 killing process with pid 71652 00:12:54.542 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71652' 00:12:54.542 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71652 00:12:54.542 22:53:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71652 00:12:55.929 00:12:55.929 real 0m2.625s 00:12:55.929 user 0m2.711s 00:12:55.929 sys 0m0.373s 00:12:55.929 22:53:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.929 22:53:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.929 ************************************ 00:12:55.929 END TEST xnvme_rpc 00:12:55.929 ************************************ 00:12:55.929 22:53:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:55.929 22:53:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.929 22:53:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.929 22:53:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.929 ************************************ 00:12:55.929 START TEST xnvme_bdevperf 00:12:55.929 ************************************ 00:12:55.929 22:53:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:55.929 22:53:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:55.929 22:53:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:12:55.929 22:53:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:55.929 22:53:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:55.929 22:53:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:55.929 22:53:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:55.929 22:53:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.929 { 00:12:55.929 "subsystems": [ 00:12:55.929 { 00:12:55.929 "subsystem": "bdev", 00:12:55.929 "config": [ 00:12:55.929 { 00:12:55.929 "params": { 00:12:55.929 "io_mechanism": "io_uring", 00:12:55.929 "conserve_cpu": false, 00:12:55.929 "filename": "/dev/nvme0n1", 00:12:55.929 "name": "xnvme_bdev" 00:12:55.929 }, 00:12:55.929 "method": "bdev_xnvme_create" 00:12:55.929 }, 00:12:55.929 { 00:12:55.929 "method": "bdev_wait_for_examine" 00:12:55.929 } 00:12:55.929 ] 00:12:55.929 } 00:12:55.929 ] 00:12:55.929 } 00:12:56.190 [2024-12-13 22:53:35.073948] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:56.190 [2024-12-13 22:53:35.074060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71717 ] 00:12:56.190 [2024-12-13 22:53:35.233110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.451 [2024-12-13 22:53:35.343832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.711 Running I/O for 5 seconds... 00:12:58.597 37311.00 IOPS, 145.75 MiB/s [2024-12-13T22:53:38.679Z] 37691.50 IOPS, 147.23 MiB/s [2024-12-13T22:53:40.064Z] 38079.00 IOPS, 148.75 MiB/s [2024-12-13T22:53:40.637Z] 38256.50 IOPS, 149.44 MiB/s 00:13:01.497 Latency(us) 00:13:01.497 [2024-12-13T22:53:40.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.497 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:01.497 xnvme_bdev : 5.00 38210.08 149.26 0.00 0.00 1670.89 346.58 7662.67 00:13:01.497 [2024-12-13T22:53:40.637Z] =================================================================================================================== 00:13:01.497 [2024-12-13T22:53:40.637Z] Total : 38210.08 149.26 0.00 0.00 1670.89 346.58 7662.67 00:13:02.442 22:53:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:02.442 22:53:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:02.442 22:53:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:02.442 22:53:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:02.442 22:53:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:02.442 { 00:13:02.442 "subsystems": [ 00:13:02.442 { 00:13:02.442 "subsystem": "bdev", 00:13:02.442 "config": [ 00:13:02.442 { 00:13:02.442 "params": { 00:13:02.442 "io_mechanism": "io_uring", 00:13:02.442 "conserve_cpu": false, 00:13:02.442 "filename": "/dev/nvme0n1", 00:13:02.442 "name": "xnvme_bdev" 00:13:02.442 }, 00:13:02.442 "method": "bdev_xnvme_create" 00:13:02.442 }, 00:13:02.442 { 00:13:02.442 "method": "bdev_wait_for_examine" 00:13:02.442 } 00:13:02.442 ] 00:13:02.442 } 00:13:02.442 ] 00:13:02.442 } 00:13:02.442 [2024-12-13 22:53:41.402866] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:13:02.442 [2024-12-13 22:53:41.402979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71796 ] 00:13:02.442 [2024-12-13 22:53:41.562126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.703 [2024-12-13 22:53:41.659323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.965 Running I/O for 5 seconds... 00:13:04.915 37945.00 IOPS, 148.22 MiB/s [2024-12-13T22:53:44.998Z] 36027.00 IOPS, 140.73 MiB/s [2024-12-13T22:53:45.941Z] 35733.00 IOPS, 139.58 MiB/s [2024-12-13T22:53:47.327Z] 35774.75 IOPS, 139.75 MiB/s 00:13:08.187 Latency(us) 00:13:08.187 [2024-12-13T22:53:47.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.187 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:08.187 xnvme_bdev : 5.00 36291.33 141.76 0.00 0.00 1759.01 118.15 10737.82 00:13:08.187 [2024-12-13T22:53:47.327Z] =================================================================================================================== 00:13:08.187 [2024-12-13T22:53:47.327Z] Total : 36291.33 141.76 0.00 0.00 1759.01 118.15 10737.82 00:13:08.759 00:13:08.759 real 0m12.649s 00:13:08.759 user 0m5.952s 00:13:08.759 sys 0m6.428s 00:13:08.759 ************************************ 00:13:08.759 END TEST xnvme_bdevperf 00:13:08.759 ************************************ 00:13:08.759 22:53:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.759 22:53:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:08.759 22:53:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:08.760 22:53:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:08.760 22:53:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.760 22:53:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.760 ************************************ 00:13:08.760 START TEST xnvme_fio_plugin 00:13:08.760 ************************************ 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:08.760 22:53:47 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:08.760 22:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:08.760 { 00:13:08.760 "subsystems": [ 00:13:08.760 { 00:13:08.760 "subsystem": "bdev", 00:13:08.760 "config": [ 00:13:08.760 { 00:13:08.760 "params": { 00:13:08.760 "io_mechanism": "io_uring", 00:13:08.760 "conserve_cpu": false, 00:13:08.760 "filename": "/dev/nvme0n1", 00:13:08.760 "name": "xnvme_bdev" 00:13:08.760 }, 00:13:08.760 "method": "bdev_xnvme_create" 00:13:08.760 }, 00:13:08.760 { 00:13:08.760 "method": "bdev_wait_for_examine" 00:13:08.760 } 00:13:08.760 ] 00:13:08.760 } 00:13:08.760 ] 00:13:08.760 } 00:13:09.022 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:09.022 fio-3.35 00:13:09.022 Starting 1 thread 00:13:15.609 00:13:15.610 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71915: Fri Dec 13 22:53:53 2024 00:13:15.610 read: IOPS=34.7k, BW=136MiB/s (142MB/s)(678MiB/5001msec) 00:13:15.610 slat (nsec): min=2859, max=92580, avg=3877.16, stdev=2270.44 00:13:15.610 clat (usec): min=777, max=5621, avg=1686.03, stdev=344.97 00:13:15.610 lat (usec): min=779, max=5629, avg=1689.91, stdev=345.35 00:13:15.610 clat percentiles (usec): 00:13:15.610 | 1.00th=[ 1057], 5.00th=[ 1188], 10.00th=[ 1270], 20.00th=[ 1385], 00:13:15.610 | 30.00th=[ 1483], 40.00th=[ 1565], 50.00th=[ 1663], 60.00th=[ 1745], 00:13:15.610 | 70.00th=[ 1844], 80.00th=[ 1958], 90.00th=[ 2114], 95.00th=[ 2278], 00:13:15.610 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3163], 99.95th=[ 3589], 00:13:15.610 | 99.99th=[ 5538] 00:13:15.610 bw ( KiB/s): min=131072, max=153600, per=100.00%, avg=139548.44, 
stdev=7849.28, samples=9 00:13:15.610 iops : min=32768, max=38400, avg=34887.11, stdev=1962.32, samples=9 00:13:15.610 lat (usec) : 1000=0.45% 00:13:15.610 lat (msec) : 2=82.28%, 4=17.23%, 10=0.04% 00:13:15.610 cpu : usr=32.68%, sys=65.96%, ctx=14, majf=0, minf=762 00:13:15.610 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:15.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.610 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:15.610 issued rwts: total=173632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.610 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:15.610 00:13:15.610 Run status group 0 (all jobs): 00:13:15.610 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=678MiB (711MB), run=5001-5001msec 00:13:15.610 ----------------------------------------------------- 00:13:15.610 Suppressions used: 00:13:15.610 count bytes template 00:13:15.610 1 11 /usr/src/fio/parse.c 00:13:15.610 1 8 libtcmalloc_minimal.so 00:13:15.610 1 904 libcrypto.so 00:13:15.610 ----------------------------------------------------- 00:13:15.610 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:15.610 22:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:15.610 { 00:13:15.610 "subsystems": [ 00:13:15.610 { 00:13:15.610 "subsystem": "bdev", 00:13:15.610 "config": [ 00:13:15.610 { 00:13:15.610 "params": { 00:13:15.610 "io_mechanism": "io_uring", 00:13:15.610 "conserve_cpu": false, 00:13:15.610 "filename": "/dev/nvme0n1", 00:13:15.610 "name": "xnvme_bdev" 00:13:15.610 }, 00:13:15.610 "method": "bdev_xnvme_create" 00:13:15.610 }, 00:13:15.610 { 00:13:15.610 "method": "bdev_wait_for_examine" 00:13:15.610 } 00:13:15.610 ] 00:13:15.610 } 00:13:15.610 ] 00:13:15.610 } 00:13:15.870 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:15.870 fio-3.35 00:13:15.870 Starting 1 thread 00:13:22.456 00:13:22.456 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72001: Fri Dec 13 22:54:00 2024 00:13:22.456 write: IOPS=37.2k, BW=145MiB/s (152MB/s)(726MiB/5001msec); 0 zone resets 00:13:22.456 slat (nsec): min=2892, max=89030, avg=3986.72, stdev=2300.80 00:13:22.456 clat (usec): min=189, max=100636, avg=1562.42, stdev=1794.19 00:13:22.456 lat (usec): min=193, max=100639, avg=1566.40, stdev=1794.25 00:13:22.456 clat percentiles (usec): 00:13:22.456 | 1.00th=[ 963], 5.00th=[ 1090], 10.00th=[ 1156], 20.00th=[ 1254], 00:13:22.456 | 30.00th=[ 1336], 40.00th=[ 1418], 50.00th=[ 1500], 60.00th=[ 1582], 00:13:22.456 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1909], 95.00th=[ 2073], 00:13:22.456 | 99.00th=[ 2442], 99.50th=[ 2573], 99.90th=[ 3556], 99.95th=[ 4015], 00:13:22.456 | 99.99th=[99091] 00:13:22.456 bw ( KiB/s): min=125504, max=157696, per=100.00%, avg=148630.22, stdev=9684.72, samples=9 00:13:22.456 iops : min=31376, max=39424, avg=37157.56, stdev=2421.18, samples=9 00:13:22.456 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=1.75% 00:13:22.456 lat (msec) : 2=91.53%, 4=6.64%, 10=0.02%, 100=0.03%, 250=0.01% 00:13:22.456 cpu : usr=33.60%, sys=65.16%, ctx=15, majf=0, minf=763 00:13:22.456 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:22.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.456 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:22.456 issued rwts: total=0,185828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.456 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.456 00:13:22.456 Run status group 0 (all jobs): 00:13:22.456 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=726MiB (761MB), run=5001-5001msec 00:13:22.456 ----------------------------------------------------- 00:13:22.456 Suppressions used: 00:13:22.456 count bytes template 00:13:22.456 1 11 /usr/src/fio/parse.c 00:13:22.456 1 8 libtcmalloc_minimal.so 00:13:22.456 1 904 libcrypto.so 00:13:22.456 ----------------------------------------------------- 00:13:22.456 00:13:22.456 00:13:22.456 real 0m13.714s 00:13:22.456 user 0m6.132s 00:13:22.456 sys 0m7.121s 00:13:22.456 22:54:01 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.456 ************************************ 00:13:22.456 22:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:22.456 END TEST xnvme_fio_plugin 00:13:22.456 ************************************ 00:13:22.456 22:54:01 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:22.456 22:54:01 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:22.456 22:54:01 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:22.456 22:54:01 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:22.456 22:54:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:22.456 22:54:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.456 22:54:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.456 ************************************ 00:13:22.456 START TEST xnvme_rpc 00:13:22.456 ************************************ 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:22.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72087 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72087 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72087 ']' 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.456 22:54:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:22.457 [2024-12-13 22:54:01.577583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:13:22.457 [2024-12-13 22:54:01.577705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72087 ] 00:13:22.718 [2024-12-13 22:54:01.738436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.978 [2024-12-13 22:54:01.856968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.548 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.548 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.549 xnvme_bdev 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.549 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.810 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:23.810 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:23.810 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72087 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72087 ']' 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72087 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72087 00:13:23.811 killing process with pid 72087 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72087' 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72087 00:13:23.811 22:54:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72087 00:13:25.734 ************************************ 00:13:25.734 END TEST xnvme_rpc 00:13:25.734 ************************************ 00:13:25.734 00:13:25.734 real 0m2.884s 00:13:25.734 user 0m2.895s 00:13:25.734 sys 0m0.470s 00:13:25.734 22:54:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.734 22:54:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.734 22:54:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:25.734 22:54:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.734 22:54:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.734 22:54:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.734 ************************************ 00:13:25.734 START TEST xnvme_bdevperf 00:13:25.734 ************************************ 00:13:25.734 22:54:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:25.734 22:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:25.734 22:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:25.734 22:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:25.734 22:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:25.734 22:54:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:13:25.734 22:54:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:25.734 22:54:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:25.734 { 00:13:25.734 "subsystems": [ 00:13:25.734 { 00:13:25.734 "subsystem": "bdev", 00:13:25.734 "config": [ 00:13:25.734 { 00:13:25.734 "params": { 00:13:25.734 "io_mechanism": "io_uring", 00:13:25.734 "conserve_cpu": true, 00:13:25.734 "filename": "/dev/nvme0n1", 00:13:25.734 "name": "xnvme_bdev" 00:13:25.734 }, 00:13:25.734 "method": "bdev_xnvme_create" 00:13:25.734 }, 00:13:25.734 { 00:13:25.734 "method": "bdev_wait_for_examine" 00:13:25.734 } 00:13:25.734 ] 00:13:25.734 } 00:13:25.734 ] 00:13:25.734 } 00:13:25.734 [2024-12-13 22:54:04.526234] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:25.734 [2024-12-13 22:54:04.526377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72156 ] 00:13:25.734 [2024-12-13 22:54:04.690925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.734 [2024-12-13 22:54:04.814794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.995 Running I/O for 5 seconds... 00:13:28.320 34714.00 IOPS, 135.60 MiB/s [2024-12-13T22:54:08.402Z] 34757.50 IOPS, 135.77 MiB/s [2024-12-13T22:54:09.344Z] 36170.00 IOPS, 141.29 MiB/s [2024-12-13T22:54:10.289Z] 37329.25 IOPS, 145.82 MiB/s [2024-12-13T22:54:10.289Z] 37853.00 IOPS, 147.86 MiB/s 00:13:31.149 Latency(us) 00:13:31.149 [2024-12-13T22:54:10.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.149 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:31.149 xnvme_bdev : 5.00 37850.08 147.85 0.00 0.00 1686.56 674.26 11191.53 00:13:31.149 [2024-12-13T22:54:10.289Z] =================================================================================================================== 00:13:31.149 [2024-12-13T22:54:10.289Z] Total : 37850.08 147.85 0.00 0.00 1686.56 674.26 11191.53 00:13:31.735 22:54:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.735 22:54:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:31.735 22:54:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:31.735 22:54:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:31.735 22:54:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.735 { 00:13:31.735 "subsystems": [ 00:13:31.735 { 00:13:31.735 "subsystem": "bdev", 00:13:31.735 "config": [ 00:13:31.735 { 00:13:31.735 "params": { 00:13:31.735 "io_mechanism": "io_uring", 00:13:31.735 "conserve_cpu": true, 00:13:31.735 "filename": "/dev/nvme0n1", 00:13:31.735 "name": "xnvme_bdev" 00:13:31.735 }, 00:13:31.735 "method": "bdev_xnvme_create" 00:13:31.735 }, 00:13:31.735 { 00:13:31.735 "method": "bdev_wait_for_examine" 00:13:31.735 } 00:13:31.735 ] 00:13:31.735 } 00:13:31.735 ] 00:13:31.735 } 00:13:31.997 [2024-12-13 22:54:10.893899] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:13:31.997 [2024-12-13 22:54:10.894012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72231 ] 00:13:31.997 [2024-12-13 22:54:11.053904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.258 [2024-12-13 22:54:11.152460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.518 Running I/O for 5 seconds... 00:13:34.404 40185.00 IOPS, 156.97 MiB/s [2024-12-13T22:54:14.487Z] 37376.50 IOPS, 146.00 MiB/s [2024-12-13T22:54:15.430Z] 36299.33 IOPS, 141.79 MiB/s [2024-12-13T22:54:16.814Z] 36030.75 IOPS, 140.75 MiB/s 00:13:37.674 Latency(us) 00:13:37.674 [2024-12-13T22:54:16.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.674 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:37.674 xnvme_bdev : 5.00 37044.08 144.70 0.00 0.00 1723.25 124.46 10838.65 00:13:37.674 [2024-12-13T22:54:16.814Z] =================================================================================================================== 00:13:37.674 [2024-12-13T22:54:16.814Z] Total : 37044.08 144.70 0.00 0.00 1723.25 124.46 10838.65 00:13:38.246 00:13:38.246 real 0m12.662s 00:13:38.246 user 0m7.986s 00:13:38.246 sys 0m4.045s 00:13:38.246 22:54:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.246 22:54:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:38.246 ************************************ 00:13:38.246 END TEST xnvme_bdevperf 00:13:38.246 ************************************ 00:13:38.246 22:54:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:38.246 22:54:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:38.246 22:54:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.246 22:54:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.246 ************************************ 00:13:38.246 START TEST xnvme_fio_plugin 00:13:38.246 ************************************ 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:38.246 22:54:17 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:38.246 22:54:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:38.246 { 00:13:38.246 "subsystems": [ 00:13:38.246 { 00:13:38.246 "subsystem": "bdev", 00:13:38.246 "config": [ 00:13:38.246 { 00:13:38.246 "params": { 00:13:38.246 "io_mechanism": "io_uring", 00:13:38.246 "conserve_cpu": true, 00:13:38.246 "filename": "/dev/nvme0n1", 00:13:38.246 "name": "xnvme_bdev" 00:13:38.246 }, 00:13:38.246 "method": "bdev_xnvme_create" 00:13:38.246 }, 00:13:38.246 { 00:13:38.246 "method": "bdev_wait_for_examine" 00:13:38.246 } 00:13:38.246 ] 00:13:38.246 } 00:13:38.246 ] 00:13:38.246 } 00:13:38.246 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:38.246 fio-3.35 00:13:38.246 Starting 1 thread 00:13:44.834 00:13:44.834 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72350: Fri Dec 13 22:54:22 2024 00:13:44.834 read: IOPS=36.2k, BW=141MiB/s (148MB/s)(708MiB/5002msec) 00:13:44.834 slat (usec): min=2, max=150, avg= 3.85, stdev= 2.19 00:13:44.834 clat (usec): min=370, max=3829, avg=1611.46, stdev=317.65 00:13:44.834 lat (usec): min=387, max=3833, avg=1615.31, stdev=318.23 00:13:44.834 clat percentiles (usec): 00:13:44.834 | 1.00th=[ 1037], 5.00th=[ 1156], 10.00th=[ 1237], 20.00th=[ 1336], 00:13:44.834 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1663], 00:13:44.834 | 70.00th=[ 1745], 80.00th=[ 1876], 90.00th=[ 2024], 95.00th=[ 2180], 00:13:44.834 | 99.00th=[ 2507], 99.50th=[ 2671], 99.90th=[ 3064], 99.95th=[ 3163], 00:13:44.834 | 99.99th=[ 3326] 00:13:44.834 bw ( KiB/s): min=132608, max=160256, per=99.63%, avg=144353.78, 
stdev=11352.74, samples=9 00:13:44.834 iops : min=33152, max=40064, avg=36088.44, stdev=2838.19, samples=9 00:13:44.834 lat (usec) : 500=0.01%, 1000=0.58% 00:13:44.834 lat (msec) : 2=87.98%, 4=11.44% 00:13:44.834 cpu : usr=52.15%, sys=44.07%, ctx=13, majf=0, minf=762 00:13:44.834 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:44.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.834 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:44.834 issued rwts: total=181180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.834 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:44.834 00:13:44.834 Run status group 0 (all jobs): 00:13:44.834 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=708MiB (742MB), run=5002-5002msec 00:13:44.834 ----------------------------------------------------- 00:13:44.834 Suppressions used: 00:13:44.834 count bytes template 00:13:44.834 1 11 /usr/src/fio/parse.c 00:13:44.834 1 8 libtcmalloc_minimal.so 00:13:44.834 1 904 libcrypto.so 00:13:44.834 ----------------------------------------------------- 00:13:44.834 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:45.096 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:45.097 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:45.097 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:45.097 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:45.097 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:45.097 22:54:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:45.097 22:54:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:45.097 22:54:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:45.097 22:54:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:45.097 22:54:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:45.097 22:54:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:45.097 { 00:13:45.097 "subsystems": [ 00:13:45.097 { 00:13:45.097 "subsystem": "bdev", 00:13:45.097 "config": [ 00:13:45.097 { 00:13:45.097 "params": { 00:13:45.097 "io_mechanism": "io_uring", 00:13:45.097 "conserve_cpu": true, 00:13:45.097 "filename": "/dev/nvme0n1", 00:13:45.097 "name": "xnvme_bdev" 00:13:45.097 }, 00:13:45.097 "method": "bdev_xnvme_create" 00:13:45.097 }, 00:13:45.097 { 00:13:45.097 "method": "bdev_wait_for_examine" 00:13:45.097 } 00:13:45.097 ] 00:13:45.097 } 00:13:45.097 ] 00:13:45.097 } 00:13:45.097 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:45.097 fio-3.35 00:13:45.097 Starting 1 thread 00:13:51.683 00:13:51.683 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72442: Fri Dec 13 22:54:29 2024 00:13:51.683 write: IOPS=39.0k, BW=152MiB/s (160MB/s)(761MiB/5001msec); 0 zone resets 00:13:51.683 slat (usec): min=2, max=285, avg= 3.97, stdev= 2.31 00:13:51.683 clat (usec): min=142, max=8739, avg=1486.27, stdev=323.70 00:13:51.683 lat (usec): min=147, max=8742, avg=1490.24, stdev=324.22 00:13:51.683 clat percentiles (usec): 00:13:51.683 | 1.00th=[ 922], 5.00th=[ 1045], 10.00th=[ 1123], 20.00th=[ 1221], 00:13:51.683 | 30.00th=[ 1303], 40.00th=[ 1385], 50.00th=[ 1450], 60.00th=[ 1532], 00:13:51.683 | 70.00th=[ 1614], 80.00th=[ 1729], 90.00th=[ 1893], 95.00th=[ 2040], 00:13:51.683 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 2999], 99.95th=[ 3621], 00:13:51.683 | 99.99th=[ 6980] 00:13:51.683 bw ( KiB/s): min=137672, max=172184, per=100.00%, avg=155979.56, stdev=10989.31, samples=9 00:13:51.683 iops : min=34418, max=43046, avg=38994.89, stdev=2747.33, samples=9 00:13:51.683 lat (usec) : 250=0.01%, 500=0.01%, 750=0.05%, 1000=2.82% 00:13:51.683 lat (msec) : 2=91.07%, 4=6.01%, 10=0.03% 00:13:51.683 cpu : usr=55.10%, sys=41.08%, ctx=20, majf=0, minf=763 00:13:51.683 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:51.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.683 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:51.683 issued rwts: total=0,194805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:51.683 00:13:51.683 Run status group 0 (all jobs): 00:13:51.683 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=761MiB (798MB), run=5001-5001msec 00:13:51.683 ----------------------------------------------------- 00:13:51.683 Suppressions used: 00:13:51.683 count bytes template 00:13:51.683 1 11 /usr/src/fio/parse.c 00:13:51.683 1 8 libtcmalloc_minimal.so 00:13:51.683 1 904 libcrypto.so 00:13:51.683 ----------------------------------------------------- 00:13:51.683 00:13:51.683 00:13:51.683 real 0m13.617s 00:13:51.683 user 0m8.139s 00:13:51.683 sys 0m4.772s 00:13:51.683 22:54:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:51.683 22:54:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.683 ************************************ 00:13:51.683 END TEST xnvme_fio_plugin 00:13:51.683 ************************************ 00:13:51.944 22:54:30 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:51.944 22:54:30 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:51.944 22:54:30 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:51.944 22:54:30 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:51.944 22:54:30 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:51.944 22:54:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:51.944 22:54:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:51.944 22:54:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:51.944 22:54:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:51.944 22:54:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.944 22:54:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.944 22:54:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.944 ************************************ 00:13:51.944 START TEST xnvme_rpc 00:13:51.944 ************************************ 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72523 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72523 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72523 ']' 00:13:51.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.944 22:54:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.945 [2024-12-13 22:54:30.917647] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:13:51.945 [2024-12-13 22:54:30.917772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72523 ] 00:13:51.945 [2024-12-13 22:54:31.077834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.205 [2024-12-13 22:54:31.173461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.777 xnvme_bdev 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.777 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:53.038 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.038 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:53.038 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:53.039 
22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72523 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72523 ']' 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72523 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.039 22:54:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72523 00:13:53.039 killing process with pid 72523 00:13:53.039 22:54:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.039 22:54:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.039 22:54:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72523' 00:13:53.039 22:54:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72523 00:13:53.039 22:54:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72523 00:13:54.955 ************************************ 00:13:54.955 END TEST xnvme_rpc 00:13:54.955 ************************************ 00:13:54.956 00:13:54.956 real 0m2.826s 00:13:54.956 user 0m2.879s 00:13:54.956 sys 0m0.419s 00:13:54.956 22:54:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.956 22:54:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.956 22:54:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:54.956 22:54:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:54.956 22:54:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.956 22:54:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.956 ************************************ 00:13:54.956 START TEST xnvme_bdevperf 00:13:54.956 ************************************ 00:13:54.956 22:54:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:54.956 22:54:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:54.956 22:54:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:13:54.956 22:54:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:54.956 22:54:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:54.956 22:54:33 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:54.956 22:54:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:54.956 22:54:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:54.956 { 00:13:54.956 "subsystems": [ 00:13:54.956 { 00:13:54.956 "subsystem": "bdev", 00:13:54.956 "config": [ 00:13:54.956 { 00:13:54.956 "params": { 00:13:54.956 "io_mechanism": "io_uring_cmd", 00:13:54.956 "conserve_cpu": false, 00:13:54.956 "filename": "/dev/ng0n1", 00:13:54.956 "name": "xnvme_bdev" 00:13:54.956 }, 00:13:54.956 "method": "bdev_xnvme_create" 00:13:54.956 }, 00:13:54.956 { 00:13:54.956 "method": "bdev_wait_for_examine" 00:13:54.956 } 00:13:54.956 ] 00:13:54.956 } 00:13:54.956 ] 00:13:54.956 } 00:13:54.956 [2024-12-13 22:54:33.808225] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:54.956 [2024-12-13 22:54:33.808364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72597 ] 00:13:54.956 [2024-12-13 22:54:33.973470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.216 [2024-12-13 22:54:34.098450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.477 Running I/O for 5 seconds... 00:13:57.360 34576.00 IOPS, 135.06 MiB/s [2024-12-13T22:54:37.442Z] 34416.00 IOPS, 134.44 MiB/s [2024-12-13T22:54:38.855Z] 34517.67 IOPS, 134.83 MiB/s [2024-12-13T22:54:39.427Z] 34519.75 IOPS, 134.84 MiB/s [2024-12-13T22:54:39.427Z] 34622.40 IOPS, 135.24 MiB/s 00:14:00.287 Latency(us) 00:14:00.287 [2024-12-13T22:54:39.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.287 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:00.287 xnvme_bdev : 5.00 34607.66 135.19 0.00 0.00 1844.90 385.97 11393.18 00:14:00.287 [2024-12-13T22:54:39.427Z] =================================================================================================================== 00:14:00.287 [2024-12-13T22:54:39.428Z] Total : 34607.66 135.19 0.00 0.00 1844.90 385.97 11393.18 00:14:01.233 22:54:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:01.233 22:54:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:01.233 22:54:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:01.233 22:54:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:01.233 22:54:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:01.233 { 00:14:01.233 "subsystems": [ 00:14:01.233 { 00:14:01.233 "subsystem": "bdev", 00:14:01.233 "config": [ 00:14:01.233 { 00:14:01.233 "params": { 00:14:01.233 "io_mechanism": "io_uring_cmd", 00:14:01.233 "conserve_cpu": false, 00:14:01.233 "filename": "/dev/ng0n1", 00:14:01.233 "name": "xnvme_bdev" 00:14:01.233 }, 00:14:01.233 "method": "bdev_xnvme_create" 00:14:01.233 }, 00:14:01.233 { 00:14:01.233 "method": "bdev_wait_for_examine" 00:14:01.233 } 00:14:01.233 ] 00:14:01.233 } 00:14:01.233 ] 00:14:01.233 } 00:14:01.233 [2024-12-13 22:54:40.278638] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
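[Editor's note] The JSON blocks traced above are the bdev configuration that xnvme.sh streams to bdevperf over /dev/fd/62: a single bdev_xnvme_create call (io_mechanism io_uring_cmd, conserve_cpu false, filename /dev/ng0n1, name xnvme_bdev) followed by bdev_wait_for_examine. A minimal sketch of replaying the randread run by hand, assuming the config is saved to a hypothetical xnvme.json instead of being generated by gen_conf:

cat > xnvme.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                  "filename": "/dev/ng0n1", "name": "xnvme_bdev" },
      "method": "bdev_xnvme_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }
EOF
# Same flags as the run above: queue depth 64, 4 KiB random reads, 5 seconds, against bdev xnvme_bdev.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096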
00:14:01.233 [2024-12-13 22:54:40.278821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72671 ] 00:14:01.495 [2024-12-13 22:54:40.443154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.495 [2024-12-13 22:54:40.562201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.756 Running I/O for 5 seconds... 00:14:03.777 36364.00 IOPS, 142.05 MiB/s [2024-12-13T22:54:43.861Z] 35938.00 IOPS, 140.38 MiB/s [2024-12-13T22:54:45.246Z] 35609.33 IOPS, 139.10 MiB/s [2024-12-13T22:54:46.189Z] 35454.25 IOPS, 138.49 MiB/s 00:14:07.049 Latency(us) 00:14:07.049 [2024-12-13T22:54:46.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.049 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:07.049 xnvme_bdev : 5.00 35299.07 137.89 0.00 0.00 1808.45 64.59 11342.77 00:14:07.049 [2024-12-13T22:54:46.189Z] =================================================================================================================== 00:14:07.049 [2024-12-13T22:54:46.189Z] Total : 35299.07 137.89 0.00 0.00 1808.45 64.59 11342.77 00:14:07.621 22:54:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.621 22:54:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:07.621 22:54:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.622 22:54:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.622 22:54:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.622 { 00:14:07.622 "subsystems": [ 00:14:07.622 { 00:14:07.622 "subsystem": "bdev", 00:14:07.622 "config": [ 00:14:07.622 { 00:14:07.622 "params": { 00:14:07.622 "io_mechanism": "io_uring_cmd", 00:14:07.622 "conserve_cpu": false, 00:14:07.622 "filename": "/dev/ng0n1", 00:14:07.622 "name": "xnvme_bdev" 00:14:07.622 }, 00:14:07.622 "method": "bdev_xnvme_create" 00:14:07.622 }, 00:14:07.622 { 00:14:07.622 "method": "bdev_wait_for_examine" 00:14:07.622 } 00:14:07.622 ] 00:14:07.622 } 00:14:07.622 ] 00:14:07.622 } 00:14:07.622 [2024-12-13 22:54:46.636308] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:07.622 [2024-12-13 22:54:46.636422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72745 ] 00:14:07.883 [2024-12-13 22:54:46.797502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.883 [2024-12-13 22:54:46.893922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.144 Running I/O for 5 seconds... 
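[Editor's note] As a quick sanity check on the randwrite numbers just reported, the MiB/s column follows directly from the IOPS figure at the 4096-byte I/O size used throughout these runs; this is plain arithmetic, not extra data from the run:

# 35299.07 IOPS * 4096 B per I/O, converted to MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 35299.07 * 4096 / (1024 * 1024) }'   # ~137.89, matching the table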
00:14:10.032 64256.00 IOPS, 251.00 MiB/s [2024-12-13T22:54:50.570Z] 64480.00 IOPS, 251.88 MiB/s [2024-12-13T22:54:51.154Z] 66368.00 IOPS, 259.25 MiB/s [2024-12-13T22:54:52.539Z] 70224.00 IOPS, 274.31 MiB/s [2024-12-13T22:54:52.539Z] 71936.00 IOPS, 281.00 MiB/s 00:14:13.399 Latency(us) 00:14:13.399 [2024-12-13T22:54:52.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.399 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:13.399 xnvme_bdev : 5.00 71898.73 280.85 0.00 0.00 886.66 450.56 2722.26 00:14:13.399 [2024-12-13T22:54:52.539Z] =================================================================================================================== 00:14:13.399 [2024-12-13T22:54:52.539Z] Total : 71898.73 280.85 0.00 0.00 886.66 450.56 2722.26 00:14:13.659 22:54:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.659 22:54:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:13.659 22:54:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:13.659 22:54:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:13.659 22:54:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.659 { 00:14:13.659 "subsystems": [ 00:14:13.659 { 00:14:13.659 "subsystem": "bdev", 00:14:13.659 "config": [ 00:14:13.659 { 00:14:13.659 "params": { 00:14:13.659 "io_mechanism": "io_uring_cmd", 00:14:13.659 "conserve_cpu": false, 00:14:13.659 "filename": "/dev/ng0n1", 00:14:13.659 "name": "xnvme_bdev" 00:14:13.659 }, 00:14:13.659 "method": "bdev_xnvme_create" 00:14:13.659 }, 00:14:13.659 { 00:14:13.659 "method": "bdev_wait_for_examine" 00:14:13.659 } 00:14:13.659 ] 00:14:13.659 } 00:14:13.659 ] 00:14:13.659 } 00:14:13.659 [2024-12-13 22:54:52.757410] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:13.659 [2024-12-13 22:54:52.757530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72814 ] 00:14:13.918 [2024-12-13 22:54:52.913114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.918 [2024-12-13 22:54:52.992559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.176 Running I/O for 5 seconds... 
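[Editor's note] The unmap result above can also be cross-checked against its latency column: at queue depth 64, Little's law puts IOPS at roughly the queue depth divided by the average completion latency. A rough check only, since the reported average also absorbs submission and polling overhead:

# 64 outstanding I/Os / 886.66 us average latency
awk 'BEGIN { printf "%.0f IOPS\n", 64 / (886.66 / 1e6) }'   # ~72182, close to the measured 71898.73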
00:14:16.058 557.00 IOPS, 2.18 MiB/s [2024-12-13T22:54:56.582Z] 407.50 IOPS, 1.59 MiB/s [2024-12-13T22:54:57.523Z] 420.67 IOPS, 1.64 MiB/s [2024-12-13T22:54:58.465Z] 440.25 IOPS, 1.72 MiB/s [2024-12-13T22:54:58.465Z] 2077.40 IOPS, 8.11 MiB/s 00:14:19.325 Latency(us) 00:14:19.325 [2024-12-13T22:54:58.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.325 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:19.325 xnvme_bdev : 5.11 2047.33 8.00 0.00 0.00 30941.54 68.14 535580.36 00:14:19.325 [2024-12-13T22:54:58.465Z] =================================================================================================================== 00:14:19.325 [2024-12-13T22:54:58.465Z] Total : 2047.33 8.00 0.00 0.00 30941.54 68.14 535580.36 00:14:19.896 00:14:19.896 real 0m25.277s 00:14:19.896 user 0m14.172s 00:14:19.896 sys 0m10.595s 00:14:19.896 22:54:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.896 22:54:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.896 ************************************ 00:14:19.896 END TEST xnvme_bdevperf 00:14:19.896 ************************************ 00:14:20.157 22:54:59 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:20.157 22:54:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:20.157 22:54:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.157 22:54:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.157 ************************************ 00:14:20.157 START TEST xnvme_fio_plugin 00:14:20.157 ************************************ 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 
00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:20.157 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:20.158 22:54:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.158 { 00:14:20.158 "subsystems": [ 00:14:20.158 { 00:14:20.158 "subsystem": "bdev", 00:14:20.158 "config": [ 00:14:20.158 { 00:14:20.158 "params": { 00:14:20.158 "io_mechanism": "io_uring_cmd", 00:14:20.158 "conserve_cpu": false, 00:14:20.158 "filename": "/dev/ng0n1", 00:14:20.158 "name": "xnvme_bdev" 00:14:20.158 }, 00:14:20.158 "method": "bdev_xnvme_create" 00:14:20.158 }, 00:14:20.158 { 00:14:20.158 "method": "bdev_wait_for_examine" 00:14:20.158 } 00:14:20.158 ] 00:14:20.158 } 00:14:20.158 ] 00:14:20.158 } 00:14:20.158 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:20.158 fio-3.35 00:14:20.158 Starting 1 thread 00:14:26.738 00:14:26.738 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72932: Fri Dec 13 22:55:04 2024 00:14:26.738 read: IOPS=40.0k, BW=156MiB/s (164MB/s)(781MiB/5001msec) 00:14:26.738 slat (usec): min=2, max=140, avg= 3.74, stdev= 2.17 00:14:26.738 clat (usec): min=556, max=3483, avg=1450.18, stdev=348.37 00:14:26.738 lat (usec): min=560, max=3510, avg=1453.92, stdev=348.73 00:14:26.738 clat percentiles (usec): 00:14:26.738 | 1.00th=[ 701], 5.00th=[ 848], 10.00th=[ 988], 20.00th=[ 1156], 00:14:26.738 | 30.00th=[ 1270], 40.00th=[ 1369], 50.00th=[ 1467], 60.00th=[ 1549], 00:14:26.738 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1860], 95.00th=[ 1991], 00:14:26.738 | 99.00th=[ 2343], 99.50th=[ 2474], 99.90th=[ 2769], 99.95th=[ 2900], 00:14:26.738 | 99.99th=[ 3294] 00:14:26.738 bw ( KiB/s): min=146944, max=197632, per=100.00%, avg=160938.67, stdev=15925.59, samples=9 00:14:26.738 iops : min=36736, max=49408, avg=40234.67, stdev=3981.40, samples=9 00:14:26.738 lat (usec) : 750=2.20%, 1000=8.31% 00:14:26.738 lat (msec) : 2=84.58%, 4=4.90% 00:14:26.738 cpu : usr=38.32%, sys=60.56%, ctx=21, majf=0, minf=762 00:14:26.738 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:26.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.738 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:14:26.738 issued rwts: total=199936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.738 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:26.738 00:14:26.738 Run status group 0 (all jobs): 00:14:26.738 READ: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=781MiB (819MB), run=5001-5001msec 00:14:26.738 ----------------------------------------------------- 00:14:26.738 Suppressions used: 00:14:26.738 count bytes template 00:14:26.738 1 11 /usr/src/fio/parse.c 00:14:26.738 1 8 libtcmalloc_minimal.so 00:14:26.738 1 904 libcrypto.so 00:14:26.738 ----------------------------------------------------- 00:14:26.738 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:26.738 22:55:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.738 { 00:14:26.738 "subsystems": [ 00:14:26.738 { 00:14:26.738 "subsystem": "bdev", 00:14:26.738 "config": [ 00:14:26.738 { 00:14:26.738 "params": { 00:14:26.738 "io_mechanism": "io_uring_cmd", 00:14:26.738 "conserve_cpu": false, 00:14:26.738 "filename": "/dev/ng0n1", 00:14:26.738 "name": "xnvme_bdev" 00:14:26.738 }, 00:14:26.738 "method": "bdev_xnvme_create" 00:14:26.738 }, 00:14:26.738 { 00:14:26.738 "method": "bdev_wait_for_examine" 00:14:26.738 } 00:14:26.738 ] 00:14:26.738 } 00:14:26.738 ] 00:14:26.738 } 00:14:26.999 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:26.999 fio-3.35 00:14:26.999 Starting 1 thread 00:14:33.594 00:14:33.594 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73023: Fri Dec 13 22:55:11 2024 00:14:33.594 write: IOPS=38.3k, BW=150MiB/s (157MB/s)(749MiB/5001msec); 0 zone resets 00:14:33.594 slat (usec): min=2, max=399, avg= 4.01, stdev= 2.80 00:14:33.594 clat (usec): min=238, max=4289, avg=1509.67, stdev=305.42 00:14:33.594 lat (usec): min=242, max=4298, avg=1513.68, stdev=305.88 00:14:33.594 clat percentiles (usec): 00:14:33.594 | 1.00th=[ 930], 5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[ 1254], 00:14:33.594 | 30.00th=[ 1336], 40.00th=[ 1418], 50.00th=[ 1500], 60.00th=[ 1565], 00:14:33.594 | 70.00th=[ 1647], 80.00th=[ 1745], 90.00th=[ 1876], 95.00th=[ 2024], 00:14:33.594 | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 2933], 99.95th=[ 3163], 00:14:33.594 | 99.99th=[ 3818] 00:14:33.594 bw ( KiB/s): min=145216, max=163880, per=99.96%, avg=153266.67, stdev=7013.15, samples=9 00:14:33.594 iops : min=36304, max=40970, avg=38316.67, stdev=1753.29, samples=9 00:14:33.594 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=2.68% 00:14:33.594 lat (msec) : 2=91.60%, 4=5.68%, 10=0.01% 00:14:33.594 cpu : usr=37.38%, sys=61.06%, ctx=29, majf=0, minf=763 00:14:33.594 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:14:33.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.594 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:33.594 issued rwts: total=0,191689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.594 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:33.594 00:14:33.594 Run status group 0 (all jobs): 00:14:33.594 WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=749MiB (785MB), run=5001-5001msec 00:14:33.594 ----------------------------------------------------- 00:14:33.594 Suppressions used: 00:14:33.594 count bytes template 00:14:33.594 1 11 /usr/src/fio/parse.c 00:14:33.594 1 8 libtcmalloc_minimal.so 00:14:33.594 1 904 libcrypto.so 00:14:33.594 ----------------------------------------------------- 00:14:33.594 00:14:33.594 ************************************ 00:14:33.594 END TEST xnvme_fio_plugin 00:14:33.594 ************************************ 00:14:33.594 00:14:33.594 real 0m13.464s 00:14:33.594 user 0m6.426s 00:14:33.594 sys 0m6.580s 00:14:33.594 22:55:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.594 22:55:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:33.594 22:55:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:33.594 22:55:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:33.594 22:55:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:14:33.594 22:55:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:33.594 22:55:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:33.594 22:55:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.594 22:55:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.594 ************************************ 00:14:33.594 START TEST xnvme_rpc 00:14:33.594 ************************************ 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73103 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73103 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73103 ']' 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.594 22:55:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:33.594 [2024-12-13 22:55:12.667383] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
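[Editor's note] This second xnvme_rpc pass repeats the first one with conserve_cpu enabled (the -c argument seen in the trace below). Outside the harness, the same flow can be driven against a running spdk_tgt with scripts/rpc.py; a sketch using only the method names and arguments the trace itself shows (the jq filter is the one common.sh uses to read back a single parameter):

# Create the bdev with conserve_cpu on, verify the stored parameter, then tear it down.
scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
scripts/rpc.py bdev_xnvme_delete xnvme_bdev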
00:14:33.594 [2024-12-13 22:55:12.667502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73103 ] 00:14:33.855 [2024-12-13 22:55:12.820683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.855 [2024-12-13 22:55:12.916721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.427 xnvme_bdev 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.427 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73103 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73103 ']' 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73103 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73103 00:14:34.688 killing process with pid 73103 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73103' 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73103 00:14:34.688 22:55:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73103 00:14:36.069 00:14:36.069 real 0m2.589s 00:14:36.069 user 0m2.714s 00:14:36.069 sys 0m0.349s 00:14:36.069 ************************************ 00:14:36.069 END TEST xnvme_rpc 00:14:36.070 ************************************ 00:14:36.070 22:55:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.070 22:55:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.330 22:55:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:36.330 22:55:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:36.330 22:55:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.330 22:55:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:36.330 ************************************ 00:14:36.330 START TEST xnvme_bdevperf 00:14:36.330 ************************************ 00:14:36.330 22:55:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:36.330 22:55:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:36.330 22:55:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:36.330 22:55:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:36.330 22:55:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:36.330 22:55:15 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:36.330 22:55:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:36.330 22:55:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:36.330 { 00:14:36.330 "subsystems": [ 00:14:36.330 { 00:14:36.330 "subsystem": "bdev", 00:14:36.330 "config": [ 00:14:36.330 { 00:14:36.330 "params": { 00:14:36.330 "io_mechanism": "io_uring_cmd", 00:14:36.330 "conserve_cpu": true, 00:14:36.330 "filename": "/dev/ng0n1", 00:14:36.330 "name": "xnvme_bdev" 00:14:36.330 }, 00:14:36.330 "method": "bdev_xnvme_create" 00:14:36.330 }, 00:14:36.330 { 00:14:36.330 "method": "bdev_wait_for_examine" 00:14:36.330 } 00:14:36.330 ] 00:14:36.330 } 00:14:36.330 ] 00:14:36.330 } 00:14:36.330 [2024-12-13 22:55:15.305495] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:36.330 [2024-12-13 22:55:15.305609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73171 ] 00:14:36.330 [2024-12-13 22:55:15.466019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.590 [2024-12-13 22:55:15.564238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.851 Running I/O for 5 seconds... 00:14:38.735 41472.00 IOPS, 162.00 MiB/s [2024-12-13T22:55:18.817Z] 41408.00 IOPS, 161.75 MiB/s [2024-12-13T22:55:20.203Z] 41344.00 IOPS, 161.50 MiB/s [2024-12-13T22:55:21.147Z] 41296.00 IOPS, 161.31 MiB/s [2024-12-13T22:55:21.147Z] 41215.80 IOPS, 161.00 MiB/s 00:14:42.007 Latency(us) 00:14:42.007 [2024-12-13T22:55:21.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.007 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:42.007 xnvme_bdev : 5.01 41189.41 160.90 0.00 0.00 1549.93 753.03 6427.57 00:14:42.007 [2024-12-13T22:55:21.147Z] =================================================================================================================== 00:14:42.007 [2024-12-13T22:55:21.147Z] Total : 41189.41 160.90 0.00 0.00 1549.93 753.03 6427.57 00:14:42.579 22:55:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:42.579 22:55:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:42.579 22:55:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:42.579 22:55:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:42.579 22:55:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:42.579 { 00:14:42.579 "subsystems": [ 00:14:42.579 { 00:14:42.579 "subsystem": "bdev", 00:14:42.579 "config": [ 00:14:42.579 { 00:14:42.579 "params": { 00:14:42.579 "io_mechanism": "io_uring_cmd", 00:14:42.579 "conserve_cpu": true, 00:14:42.579 "filename": "/dev/ng0n1", 00:14:42.579 "name": "xnvme_bdev" 00:14:42.579 }, 00:14:42.579 "method": "bdev_xnvme_create" 00:14:42.579 }, 00:14:42.579 { 00:14:42.579 "method": "bdev_wait_for_examine" 00:14:42.579 } 00:14:42.579 ] 00:14:42.579 } 00:14:42.579 ] 00:14:42.579 } 00:14:42.579 [2024-12-13 22:55:21.622855] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
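[Editor's note] Comparing this conserve_cpu=true randread pass (41189.41 IOPS) with the conserve_cpu=false pass earlier in the log (34607.66 IOPS, same bdevperf flags), the relative difference is easy to compute; treat it as indicative only, since the two figures come from separate 5-second samples on the same VM:

awk 'BEGIN { printf "%.1f%%\n", (41189.41 / 34607.66 - 1) * 100 }'   # ~19.0% higher IOPS with conserve_cpu enabled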
00:14:42.579 [2024-12-13 22:55:21.622970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73244 ] 00:14:42.839 [2024-12-13 22:55:21.781362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.839 [2024-12-13 22:55:21.878323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.100 Running I/O for 5 seconds... 00:14:45.431 30386.00 IOPS, 118.70 MiB/s [2024-12-13T22:55:25.515Z] 25468.00 IOPS, 99.48 MiB/s [2024-12-13T22:55:26.512Z] 24376.67 IOPS, 95.22 MiB/s [2024-12-13T22:55:27.455Z] 24683.50 IOPS, 96.42 MiB/s [2024-12-13T22:55:27.455Z] 27086.00 IOPS, 105.80 MiB/s 00:14:48.315 Latency(us) 00:14:48.315 [2024-12-13T22:55:27.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.315 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:48.315 xnvme_bdev : 5.01 27064.19 105.72 0.00 0.00 2360.31 74.83 70173.93 00:14:48.315 [2024-12-13T22:55:27.455Z] =================================================================================================================== 00:14:48.315 [2024-12-13T22:55:27.455Z] Total : 27064.19 105.72 0.00 0.00 2360.31 74.83 70173.93 00:14:48.888 22:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.888 22:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:48.888 22:55:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:48.888 22:55:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.888 22:55:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.888 { 00:14:48.888 "subsystems": [ 00:14:48.888 { 00:14:48.888 "subsystem": "bdev", 00:14:48.888 "config": [ 00:14:48.888 { 00:14:48.888 "params": { 00:14:48.888 "io_mechanism": "io_uring_cmd", 00:14:48.888 "conserve_cpu": true, 00:14:48.888 "filename": "/dev/ng0n1", 00:14:48.888 "name": "xnvme_bdev" 00:14:48.888 }, 00:14:48.888 "method": "bdev_xnvme_create" 00:14:48.888 }, 00:14:48.888 { 00:14:48.888 "method": "bdev_wait_for_examine" 00:14:48.888 } 00:14:48.888 ] 00:14:48.888 } 00:14:48.888 ] 00:14:48.888 } 00:14:49.149 [2024-12-13 22:55:28.028107] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:49.149 [2024-12-13 22:55:28.028477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73320 ] 00:14:49.149 [2024-12-13 22:55:28.194296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.411 [2024-12-13 22:55:28.316996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.672 Running I/O for 5 seconds... 
00:14:51.560 78848.00 IOPS, 308.00 MiB/s [2024-12-13T22:55:31.642Z] 78464.00 IOPS, 306.50 MiB/s [2024-12-13T22:55:33.030Z] 78784.00 IOPS, 307.75 MiB/s [2024-12-13T22:55:33.973Z] 79456.00 IOPS, 310.38 MiB/s 00:14:54.833 Latency(us) 00:14:54.833 [2024-12-13T22:55:33.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.833 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:54.833 xnvme_bdev : 5.00 79982.19 312.43 0.00 0.00 796.76 456.86 2873.50 00:14:54.833 [2024-12-13T22:55:33.973Z] =================================================================================================================== 00:14:54.833 [2024-12-13T22:55:33.973Z] Total : 79982.19 312.43 0.00 0.00 796.76 456.86 2873.50 00:14:55.403 22:55:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.403 22:55:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:55.403 22:55:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:55.403 22:55:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:55.403 22:55:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:55.403 { 00:14:55.403 "subsystems": [ 00:14:55.403 { 00:14:55.403 "subsystem": "bdev", 00:14:55.403 "config": [ 00:14:55.403 { 00:14:55.403 "params": { 00:14:55.403 "io_mechanism": "io_uring_cmd", 00:14:55.403 "conserve_cpu": true, 00:14:55.403 "filename": "/dev/ng0n1", 00:14:55.403 "name": "xnvme_bdev" 00:14:55.403 }, 00:14:55.403 "method": "bdev_xnvme_create" 00:14:55.403 }, 00:14:55.403 { 00:14:55.403 "method": "bdev_wait_for_examine" 00:14:55.403 } 00:14:55.403 ] 00:14:55.403 } 00:14:55.403 ] 00:14:55.403 } 00:14:55.403 [2024-12-13 22:55:34.371427] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:55.403 [2024-12-13 22:55:34.371532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73394 ] 00:14:55.403 [2024-12-13 22:55:34.526128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.662 [2024-12-13 22:55:34.600270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.662 Running I/O for 5 seconds... 
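[Editor's note] Once the write_zeroes pass below finishes, the xnvme_fio_plugin test re-exercises the same bdev through fio's external spdk_bdev ioengine. Pulling the invocation out of the later trace into one line (the libasan LD_PRELOAD comes from the ASAN build; the JSON is normally fed over /dev/fd/62 by gen_conf, so a saved copy such as the hypothetical xnvme.json above, with conserve_cpu flipped to true, stands in for it here):

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./xnvme.json --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev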
00:14:57.973 54766.00 IOPS, 213.93 MiB/s [2024-12-13T22:55:38.052Z] 52486.50 IOPS, 205.03 MiB/s [2024-12-13T22:55:38.995Z] 51004.67 IOPS, 199.24 MiB/s [2024-12-13T22:55:39.936Z] 48893.50 IOPS, 190.99 MiB/s [2024-12-13T22:55:39.936Z] 46887.60 IOPS, 183.15 MiB/s 00:15:00.796 Latency(us) 00:15:00.796 [2024-12-13T22:55:39.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.796 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:00.796 xnvme_bdev : 5.01 46834.36 182.95 0.00 0.00 1360.45 51.20 20467.40 00:15:00.796 [2024-12-13T22:55:39.936Z] =================================================================================================================== 00:15:00.796 [2024-12-13T22:55:39.936Z] Total : 46834.36 182.95 0.00 0.00 1360.45 51.20 20467.40 00:15:01.741 00:15:01.741 real 0m25.351s 00:15:01.741 user 0m17.469s 00:15:01.741 sys 0m5.564s 00:15:01.741 ************************************ 00:15:01.741 END TEST xnvme_bdevperf 00:15:01.741 ************************************ 00:15:01.741 22:55:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.741 22:55:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.741 22:55:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:01.741 22:55:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.741 22:55:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.741 22:55:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.741 ************************************ 00:15:01.741 START TEST xnvme_fio_plugin 00:15:01.741 ************************************ 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.741 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.742 22:55:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.742 { 00:15:01.742 "subsystems": [ 00:15:01.742 { 00:15:01.742 "subsystem": "bdev", 00:15:01.742 "config": [ 00:15:01.742 { 00:15:01.742 "params": { 00:15:01.742 "io_mechanism": "io_uring_cmd", 00:15:01.742 "conserve_cpu": true, 00:15:01.742 "filename": "/dev/ng0n1", 00:15:01.742 "name": "xnvme_bdev" 00:15:01.742 }, 00:15:01.742 "method": "bdev_xnvme_create" 00:15:01.742 }, 00:15:01.742 { 00:15:01.742 "method": "bdev_wait_for_examine" 00:15:01.742 } 00:15:01.742 ] 00:15:01.742 } 00:15:01.742 ] 00:15:01.742 } 00:15:01.742 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:01.742 fio-3.35 00:15:01.742 Starting 1 thread 00:15:08.331 00:15:08.331 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73508: Fri Dec 13 22:55:46 2024 00:15:08.331 read: IOPS=39.2k, BW=153MiB/s (161MB/s)(766MiB/5001msec) 00:15:08.331 slat (usec): min=2, max=283, avg= 3.64, stdev= 2.04 00:15:08.331 clat (usec): min=775, max=3104, avg=1487.07, stdev=302.22 00:15:08.331 lat (usec): min=778, max=3152, avg=1490.71, stdev=302.66 00:15:08.331 clat percentiles (usec): 00:15:08.331 | 1.00th=[ 938], 5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[ 1237], 00:15:08.331 | 30.00th=[ 1319], 40.00th=[ 1385], 50.00th=[ 1450], 60.00th=[ 1516], 00:15:08.331 | 70.00th=[ 1598], 80.00th=[ 1713], 90.00th=[ 1893], 95.00th=[ 2040], 00:15:08.331 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2737], 99.95th=[ 2835], 00:15:08.331 | 99.99th=[ 2999] 00:15:08.331 bw ( KiB/s): min=140800, max=166400, per=99.72%, avg=156501.33, stdev=9653.61, samples=9 00:15:08.331 iops : min=35200, max=41600, avg=39125.33, stdev=2413.40, samples=9 00:15:08.331 lat (usec) : 1000=2.41% 00:15:08.331 lat (msec) : 2=91.46%, 4=6.14% 00:15:08.331 cpu : usr=65.08%, sys=32.14%, ctx=54, majf=0, minf=762 00:15:08.331 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:08.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.331 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:08.331 issued rwts: total=196212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.331 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:08.331 00:15:08.331 Run status group 0 (all jobs): 00:15:08.331 READ: bw=153MiB/s (161MB/s), 153MiB/s-153MiB/s (161MB/s-161MB/s), io=766MiB (804MB), run=5001-5001msec 00:15:08.331 ----------------------------------------------------- 00:15:08.331 Suppressions used: 00:15:08.331 count bytes template 00:15:08.331 1 11 /usr/src/fio/parse.c 00:15:08.331 1 8 libtcmalloc_minimal.so 00:15:08.331 1 904 libcrypto.so 00:15:08.331 ----------------------------------------------------- 00:15:08.331 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:08.331 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:08.332 22:55:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.332 { 00:15:08.332 "subsystems": [ 00:15:08.332 { 00:15:08.332 "subsystem": "bdev", 00:15:08.332 "config": [ 00:15:08.332 { 00:15:08.332 "params": { 00:15:08.332 "io_mechanism": "io_uring_cmd", 00:15:08.332 "conserve_cpu": true, 00:15:08.332 "filename": "/dev/ng0n1", 00:15:08.332 "name": "xnvme_bdev" 00:15:08.332 }, 00:15:08.332 "method": "bdev_xnvme_create" 00:15:08.332 }, 00:15:08.332 { 00:15:08.332 "method": "bdev_wait_for_examine" 00:15:08.332 } 00:15:08.332 ] 00:15:08.332 } 00:15:08.332 ] 00:15:08.332 } 00:15:08.592 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:08.592 fio-3.35 00:15:08.592 Starting 1 thread 00:15:15.177 00:15:15.177 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73597: Fri Dec 13 22:55:53 2024 00:15:15.177 write: IOPS=41.5k, BW=162MiB/s (170MB/s)(811MiB/5001msec); 0 zone resets 00:15:15.177 slat (usec): min=2, max=233, avg= 4.25, stdev= 2.65 00:15:15.177 clat (usec): min=153, max=4992, avg=1373.96, stdev=300.56 00:15:15.177 lat (usec): min=226, max=4996, avg=1378.21, stdev=301.33 00:15:15.177 clat percentiles (usec): 00:15:15.177 | 1.00th=[ 873], 5.00th=[ 979], 10.00th=[ 1045], 20.00th=[ 1123], 00:15:15.177 | 30.00th=[ 1205], 40.00th=[ 1270], 50.00th=[ 1336], 60.00th=[ 1401], 00:15:15.177 | 70.00th=[ 1483], 80.00th=[ 1598], 90.00th=[ 1745], 95.00th=[ 1893], 00:15:15.177 | 99.00th=[ 2278], 99.50th=[ 2540], 99.90th=[ 3294], 99.95th=[ 3687], 00:15:15.177 | 99.99th=[ 4424] 00:15:15.177 bw ( KiB/s): min=145336, max=171912, per=99.24%, avg=164806.22, stdev=7797.10, samples=9 00:15:15.177 iops : min=36334, max=42978, avg=41201.56, stdev=1949.28, samples=9 00:15:15.177 lat (usec) : 250=0.01%, 500=0.01%, 750=0.12%, 1000=6.36% 00:15:15.177 lat (msec) : 2=90.57%, 4=2.89%, 10=0.03% 00:15:15.177 cpu : usr=57.34%, sys=38.50%, ctx=18, majf=0, minf=763 00:15:15.177 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.6% 00:15:15.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.177 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:15.177 issued rwts: total=0,207631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:15.177 00:15:15.177 Run status group 0 (all jobs): 00:15:15.177 WRITE: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=811MiB (850MB), run=5001-5001msec 00:15:15.177 ----------------------------------------------------- 00:15:15.177 Suppressions used: 00:15:15.177 count bytes template 00:15:15.177 1 11 /usr/src/fio/parse.c 00:15:15.177 1 8 libtcmalloc_minimal.so 00:15:15.177 1 904 libcrypto.so 00:15:15.177 ----------------------------------------------------- 00:15:15.177 00:15:15.177 ************************************ 00:15:15.177 END TEST xnvme_fio_plugin 00:15:15.177 ************************************ 00:15:15.177 00:15:15.177 real 0m13.494s 00:15:15.177 user 0m8.777s 00:15:15.177 sys 0m4.046s 00:15:15.177 22:55:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.177 22:55:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:15.177 22:55:54 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73103 00:15:15.177 22:55:54 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73103 ']' 00:15:15.177 Process with pid 73103 is not found 00:15:15.177 22:55:54 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 73103 00:15:15.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73103) - No such process 00:15:15.177 22:55:54 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73103 is not found' 00:15:15.177 ************************************ 00:15:15.177 END TEST nvme_xnvme 00:15:15.177 ************************************ 00:15:15.177 00:15:15.177 real 3m28.140s 00:15:15.177 user 1m56.459s 00:15:15.177 sys 1m16.466s 00:15:15.177 22:55:54 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.177 22:55:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.177 22:55:54 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:15.177 22:55:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:15.177 22:55:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.177 22:55:54 -- common/autotest_common.sh@10 -- # set +x 00:15:15.177 ************************************ 00:15:15.177 START TEST blockdev_xnvme 00:15:15.177 ************************************ 00:15:15.177 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:15.472 * Looking for test storage... 00:15:15.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:15.472 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:15.472 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:15.472 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:15.472 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.472 22:55:54 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:15.472 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.472 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:15.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.472 --rc genhtml_branch_coverage=1 00:15:15.472 --rc genhtml_function_coverage=1 00:15:15.472 --rc genhtml_legend=1 00:15:15.472 --rc geninfo_all_blocks=1 00:15:15.472 --rc geninfo_unexecuted_blocks=1 00:15:15.472 00:15:15.472 ' 00:15:15.472 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:15.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.472 --rc genhtml_branch_coverage=1 00:15:15.472 --rc genhtml_function_coverage=1 00:15:15.472 --rc genhtml_legend=1 00:15:15.472 --rc geninfo_all_blocks=1 00:15:15.472 --rc geninfo_unexecuted_blocks=1 00:15:15.472 00:15:15.472 ' 00:15:15.472 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:15.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.472 --rc genhtml_branch_coverage=1 00:15:15.472 --rc genhtml_function_coverage=1 00:15:15.472 --rc genhtml_legend=1 00:15:15.472 --rc geninfo_all_blocks=1 00:15:15.472 --rc geninfo_unexecuted_blocks=1 00:15:15.472 00:15:15.472 ' 00:15:15.472 22:55:54 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:15.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.472 --rc genhtml_branch_coverage=1 00:15:15.472 --rc genhtml_function_coverage=1 00:15:15.472 --rc genhtml_legend=1 00:15:15.472 --rc geninfo_all_blocks=1 00:15:15.472 --rc geninfo_unexecuted_blocks=1 00:15:15.472 00:15:15.472 ' 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:15.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:15.472 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73727 00:15:15.473 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:15.473 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73727 00:15:15.473 22:55:54 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73727 ']' 00:15:15.473 22:55:54 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.473 22:55:54 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.473 22:55:54 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.473 22:55:54 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.473 22:55:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.473 22:55:54 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:15.473 [2024-12-13 22:55:54.498446] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
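The target started above is populated by the setup_xnvme_conf step traced below, which registers every /dev/nvme*n* namespace as an xNVMe bdev over io_uring. As a sketch of what each generated entry amounts to when replayed by hand against the running target (device path, bdev name and the -c conserve_cpu flag copied from the commands printed further down in this log; this is illustrative and not part of the captured run):

# Create one xNVMe bdev per namespace, io_uring mechanism, conserve_cpu enabled (-c).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
# Block until bdev examination has finished before submitting any I/O to the new bdev.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine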
00:15:15.473 [2024-12-13 22:55:54.499329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73727 ] 00:15:15.752 [2024-12-13 22:55:54.662351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.752 [2024-12-13 22:55:54.793840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.324 22:55:55 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.324 22:55:55 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:16.324 22:55:55 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:16.324 22:55:55 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:16.324 22:55:55 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:16.324 22:55:55 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:16.324 22:55:55 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:16.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:17.468 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:17.468 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:17.468 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:17.468 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:17.468 22:55:56 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:17.468 nvme0n1 00:15:17.468 nvme0n2 00:15:17.468 nvme0n3 00:15:17.468 nvme1n1 00:15:17.468 nvme2n1 00:15:17.468 nvme3n1 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.468 
22:55:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.468 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:17.468 22:55:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.469 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:17.469 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:17.469 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "27cf5ffc-90b4-45ed-8f95-23d75740a93e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "27cf5ffc-90b4-45ed-8f95-23d75740a93e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "51538fb2-b5b2-4a72-911d-085b7cfd18c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "51538fb2-b5b2-4a72-911d-085b7cfd18c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "7686a7f2-ab78-4175-bdc0-a8b452c5b166"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7686a7f2-ab78-4175-bdc0-a8b452c5b166",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "449e6762-99e1-47bb-8d39-f0aa43f7e726"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "449e6762-99e1-47bb-8d39-f0aa43f7e726",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "db9de1ea-b93b-4bc0-a2f4-f2de455af491"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "db9de1ea-b93b-4bc0-a2f4-f2de455af491",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8fe1b783-9135-4c49-a870-55a91bbf5a8d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8fe1b783-9135-4c49-a870-55a91bbf5a8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:17.469 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:17.469 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:17.469 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:17.469 22:55:56 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73727 00:15:17.469 22:55:56 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73727 ']' 00:15:17.469 22:55:56 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73727 00:15:17.469 22:55:56 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:17.469 22:55:56 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.469 22:55:56 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73727 00:15:17.730 killing process with pid 73727 00:15:17.730 22:55:56 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.730 22:55:56 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.730 22:55:56 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73727' 00:15:17.730 22:55:56 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73727 00:15:17.730 22:55:56 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73727 00:15:19.117 22:55:58 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:19.117 22:55:58 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:19.117 22:55:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:19.117 22:55:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.117 22:55:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 ************************************ 00:15:19.117 START TEST bdev_hello_world 00:15:19.117 ************************************ 00:15:19.117 22:55:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:19.378 [2024-12-13 22:55:58.316070] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:19.379 [2024-12-13 22:55:58.316317] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74011 ] 00:15:19.379 [2024-12-13 22:55:58.478186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.641 [2024-12-13 22:55:58.604015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.902 [2024-12-13 22:55:59.008526] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:19.902 [2024-12-13 22:55:59.008831] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:19.902 [2024-12-13 22:55:59.008861] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:19.902 [2024-12-13 22:55:59.011013] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:19.902 [2024-12-13 22:55:59.011986] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:19.902 [2024-12-13 22:55:59.012044] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:19.902 [2024-12-13 22:55:59.012585] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:19.902 00:15:19.902 [2024-12-13 22:55:59.012616] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:20.845 00:15:20.845 real 0m1.535s 00:15:20.845 user 0m1.199s 00:15:20.845 sys 0m0.188s 00:15:20.845 22:55:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.845 ************************************ 00:15:20.845 END TEST bdev_hello_world 00:15:20.845 ************************************ 00:15:20.845 22:55:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:20.845 22:55:59 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:20.845 22:55:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:20.845 22:55:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.845 22:55:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.845 ************************************ 00:15:20.845 START TEST bdev_bounds 00:15:20.845 ************************************ 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:20.845 Process bdevio pid: 74049 00:15:20.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74049 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74049' 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74049 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74049 ']' 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:20.845 22:55:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:20.845 [2024-12-13 22:55:59.912309] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
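The bdev_bounds test that begins here runs the bdevio app against the same bdev.json and then drives its CUnit suites over RPC. Reconstructed by hand from the trace, the flow is roughly the two commands below; the trailing empty string is the unused extra-arguments slot, -s 0 is the PRE_RESERVED_MEM value from the trace, and the script itself waits on the RPC socket rather than backgrounding the app as shown in this sketch:

# Start bdevio in wait-for-RPC mode (-w) with the generated bdev config.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
# Once it is listening, trigger all registered blockdev test suites.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests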
00:15:20.845 [2024-12-13 22:55:59.912439] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74049 ] 00:15:21.107 [2024-12-13 22:56:00.075530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:21.107 [2024-12-13 22:56:00.207046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.107 [2024-12-13 22:56:00.207366] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.107 [2024-12-13 22:56:00.207411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.681 22:56:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.681 22:56:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:21.681 22:56:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:21.942 I/O targets: 00:15:21.942 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:21.942 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:21.942 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:21.942 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:21.942 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:21.942 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:21.942 00:15:21.942 00:15:21.942 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.942 http://cunit.sourceforge.net/ 00:15:21.942 00:15:21.942 00:15:21.942 Suite: bdevio tests on: nvme3n1 00:15:21.942 Test: blockdev write read block ...passed 00:15:21.942 Test: blockdev write zeroes read block ...passed 00:15:21.942 Test: blockdev write zeroes read no split ...passed 00:15:21.942 Test: blockdev write zeroes read split ...passed 00:15:21.942 Test: blockdev write zeroes read split partial ...passed 00:15:21.942 Test: blockdev reset ...passed 00:15:21.942 Test: blockdev write read 8 blocks ...passed 00:15:21.942 Test: blockdev write read size > 128k ...passed 00:15:21.942 Test: blockdev write read invalid size ...passed 00:15:21.942 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:21.942 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:21.942 Test: blockdev write read max offset ...passed 00:15:21.942 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:21.942 Test: blockdev writev readv 8 blocks ...passed 00:15:21.942 Test: blockdev writev readv 30 x 1block ...passed 00:15:21.942 Test: blockdev writev readv block ...passed 00:15:21.942 Test: blockdev writev readv size > 128k ...passed 00:15:21.942 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:21.942 Test: blockdev comparev and writev ...passed 00:15:21.942 Test: blockdev nvme passthru rw ...passed 00:15:21.942 Test: blockdev nvme passthru vendor specific ...passed 00:15:21.942 Test: blockdev nvme admin passthru ...passed 00:15:21.942 Test: blockdev copy ...passed 00:15:21.942 Suite: bdevio tests on: nvme2n1 00:15:21.942 Test: blockdev write read block ...passed 00:15:21.942 Test: blockdev write zeroes read block ...passed 00:15:21.942 Test: blockdev write zeroes read no split ...passed 00:15:21.942 Test: blockdev write zeroes read split ...passed 00:15:21.942 Test: blockdev write zeroes read split partial ...passed 00:15:21.942 Test: blockdev reset ...passed 
00:15:21.942 Test: blockdev write read 8 blocks ...passed 00:15:21.942 Test: blockdev write read size > 128k ...passed 00:15:21.942 Test: blockdev write read invalid size ...passed 00:15:21.942 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:21.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:21.943 Test: blockdev write read max offset ...passed 00:15:21.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:21.943 Test: blockdev writev readv 8 blocks ...passed 00:15:21.943 Test: blockdev writev readv 30 x 1block ...passed 00:15:21.943 Test: blockdev writev readv block ...passed 00:15:21.943 Test: blockdev writev readv size > 128k ...passed 00:15:21.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:21.943 Test: blockdev comparev and writev ...passed 00:15:21.943 Test: blockdev nvme passthru rw ...passed 00:15:21.943 Test: blockdev nvme passthru vendor specific ...passed 00:15:21.943 Test: blockdev nvme admin passthru ...passed 00:15:21.943 Test: blockdev copy ...passed 00:15:21.943 Suite: bdevio tests on: nvme1n1 00:15:21.943 Test: blockdev write read block ...passed 00:15:21.943 Test: blockdev write zeroes read block ...passed 00:15:21.943 Test: blockdev write zeroes read no split ...passed 00:15:21.943 Test: blockdev write zeroes read split ...passed 00:15:22.204 Test: blockdev write zeroes read split partial ...passed 00:15:22.204 Test: blockdev reset ...passed 00:15:22.204 Test: blockdev write read 8 blocks ...passed 00:15:22.204 Test: blockdev write read size > 128k ...passed 00:15:22.204 Test: blockdev write read invalid size ...passed 00:15:22.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.204 Test: blockdev write read max offset ...passed 00:15:22.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.204 Test: blockdev writev readv 8 blocks ...passed 00:15:22.204 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.204 Test: blockdev writev readv block ...passed 00:15:22.204 Test: blockdev writev readv size > 128k ...passed 00:15:22.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.204 Test: blockdev comparev and writev ...passed 00:15:22.204 Test: blockdev nvme passthru rw ...passed 00:15:22.204 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.204 Test: blockdev nvme admin passthru ...passed 00:15:22.204 Test: blockdev copy ...passed 00:15:22.204 Suite: bdevio tests on: nvme0n3 00:15:22.204 Test: blockdev write read block ...passed 00:15:22.204 Test: blockdev write zeroes read block ...passed 00:15:22.204 Test: blockdev write zeroes read no split ...passed 00:15:22.204 Test: blockdev write zeroes read split ...passed 00:15:22.204 Test: blockdev write zeroes read split partial ...passed 00:15:22.204 Test: blockdev reset ...passed 00:15:22.204 Test: blockdev write read 8 blocks ...passed 00:15:22.204 Test: blockdev write read size > 128k ...passed 00:15:22.204 Test: blockdev write read invalid size ...passed 00:15:22.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.204 Test: blockdev write read max offset ...passed 00:15:22.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.204 Test: blockdev writev readv 8 blocks 
...passed 00:15:22.204 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.204 Test: blockdev writev readv block ...passed 00:15:22.204 Test: blockdev writev readv size > 128k ...passed 00:15:22.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.205 Test: blockdev comparev and writev ...passed 00:15:22.205 Test: blockdev nvme passthru rw ...passed 00:15:22.205 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.205 Test: blockdev nvme admin passthru ...passed 00:15:22.205 Test: blockdev copy ...passed 00:15:22.205 Suite: bdevio tests on: nvme0n2 00:15:22.205 Test: blockdev write read block ...passed 00:15:22.205 Test: blockdev write zeroes read block ...passed 00:15:22.205 Test: blockdev write zeroes read no split ...passed 00:15:22.205 Test: blockdev write zeroes read split ...passed 00:15:22.205 Test: blockdev write zeroes read split partial ...passed 00:15:22.205 Test: blockdev reset ...passed 00:15:22.205 Test: blockdev write read 8 blocks ...passed 00:15:22.205 Test: blockdev write read size > 128k ...passed 00:15:22.205 Test: blockdev write read invalid size ...passed 00:15:22.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.205 Test: blockdev write read max offset ...passed 00:15:22.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.205 Test: blockdev writev readv 8 blocks ...passed 00:15:22.205 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.205 Test: blockdev writev readv block ...passed 00:15:22.205 Test: blockdev writev readv size > 128k ...passed 00:15:22.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.205 Test: blockdev comparev and writev ...passed 00:15:22.205 Test: blockdev nvme passthru rw ...passed 00:15:22.205 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.205 Test: blockdev nvme admin passthru ...passed 00:15:22.205 Test: blockdev copy ...passed 00:15:22.205 Suite: bdevio tests on: nvme0n1 00:15:22.205 Test: blockdev write read block ...passed 00:15:22.205 Test: blockdev write zeroes read block ...passed 00:15:22.205 Test: blockdev write zeroes read no split ...passed 00:15:22.205 Test: blockdev write zeroes read split ...passed 00:15:22.205 Test: blockdev write zeroes read split partial ...passed 00:15:22.205 Test: blockdev reset ...passed 00:15:22.205 Test: blockdev write read 8 blocks ...passed 00:15:22.205 Test: blockdev write read size > 128k ...passed 00:15:22.205 Test: blockdev write read invalid size ...passed 00:15:22.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.205 Test: blockdev write read max offset ...passed 00:15:22.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.205 Test: blockdev writev readv 8 blocks ...passed 00:15:22.205 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.465 Test: blockdev writev readv block ...passed 00:15:22.465 Test: blockdev writev readv size > 128k ...passed 00:15:22.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.465 Test: blockdev comparev and writev ...passed 00:15:22.465 Test: blockdev nvme passthru rw ...passed 00:15:22.465 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.465 Test: blockdev nvme admin passthru ...passed 00:15:22.465 Test: blockdev copy ...passed 
00:15:22.465 00:15:22.465 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.465 suites 6 6 n/a 0 0 00:15:22.465 tests 138 138 138 0 0 00:15:22.465 asserts 780 780 780 0 n/a 00:15:22.465 00:15:22.465 Elapsed time = 1.259 seconds 00:15:22.465 0 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74049 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74049 ']' 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74049 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74049 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.465 killing process with pid 74049 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74049' 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74049 00:15:22.465 22:56:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74049 00:15:23.409 22:56:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:23.409 00:15:23.409 real 0m2.369s 00:15:23.409 user 0m5.701s 00:15:23.409 sys 0m0.369s 00:15:23.409 22:56:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.409 22:56:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:23.409 ************************************ 00:15:23.409 END TEST bdev_bounds 00:15:23.409 ************************************ 00:15:23.409 22:56:02 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:23.409 22:56:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:23.409 22:56:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.409 22:56:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.409 ************************************ 00:15:23.409 START TEST bdev_nbd 00:15:23.409 ************************************ 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
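The bdev_nbd test being set up below exposes each xNVMe bdev as a kernel /dev/nbdX device through a bdev_svc app and checks it with a single direct-I/O read. Condensed into by-hand form (every path and RPC taken verbatim from the trace; bdev_svc is backgrounded here only for illustration, the script uses its own waitforlisten helper):

# Start a bare bdev service on the nbd-specific RPC socket with the generated bdev config.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
# Export one bdev; the RPC prints the /dev/nbdX node it attached (e.g. /dev/nbd0).
nbd_device=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1)
# Read one 4 KiB block with O_DIRECT to confirm the mapping carries data.
dd if="$nbd_device" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
# List the current bdev-to-nbd mappings.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks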
00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:23.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74105 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74105 /var/tmp/spdk-nbd.sock 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74105 ']' 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.409 22:56:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:23.409 [2024-12-13 22:56:02.370710] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:23.409 [2024-12-13 22:56:02.371259] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.409 [2024-12-13 22:56:02.533717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.670 [2024-12-13 22:56:02.661071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:24.243 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.505 
1+0 records in 00:15:24.505 1+0 records out 00:15:24.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105759 s, 3.9 MB/s 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:24.505 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.767 1+0 records in 00:15:24.767 1+0 records out 00:15:24.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112343 s, 3.6 MB/s 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:24.767 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:25.029 22:56:03 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.029 1+0 records in 00:15:25.029 1+0 records out 00:15:25.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00088718 s, 4.6 MB/s 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.029 22:56:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.029 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.290 1+0 records in 00:15:25.290 1+0 records out 00:15:25.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130265 s, 3.1 MB/s 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.290 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.551 1+0 records in 00:15:25.551 1+0 records out 00:15:25.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132961 s, 3.1 MB/s 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.551 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:25.813 22:56:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.813 1+0 records in 00:15:25.813 1+0 records out 00:15:25.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129254 s, 3.2 MB/s 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.813 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:26.076 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd0", 00:15:26.076 "bdev_name": "nvme0n1" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd1", 00:15:26.076 "bdev_name": "nvme0n2" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd2", 00:15:26.076 "bdev_name": "nvme0n3" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd3", 00:15:26.076 "bdev_name": "nvme1n1" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd4", 00:15:26.076 "bdev_name": "nvme2n1" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd5", 00:15:26.076 "bdev_name": "nvme3n1" 00:15:26.076 } 00:15:26.076 ]' 00:15:26.076 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:26.076 22:56:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd0", 00:15:26.076 "bdev_name": "nvme0n1" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd1", 00:15:26.076 "bdev_name": "nvme0n2" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd2", 00:15:26.076 "bdev_name": "nvme0n3" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd3", 00:15:26.076 "bdev_name": "nvme1n1" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd4", 00:15:26.076 "bdev_name": "nvme2n1" 00:15:26.076 }, 00:15:26.076 { 00:15:26.076 "nbd_device": "/dev/nbd5", 00:15:26.076 "bdev_name": "nvme3n1" 00:15:26.076 } 00:15:26.076 ]' 00:15:26.076 22:56:04 
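The waitfornbd checks traced above reduce to a small readiness probe: poll /proc/partitions until the nbd name is registered, then read one 4 KiB block with O_DIRECT and confirm the scratch file came back non-empty. A stand-alone sketch of that pattern, assuming the same paths as the trace (the grep, dd and stat steps and the 20-iteration cap are as traced; the sleep between retries and the function name are illustrative additions):

  wait_for_nbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          # the device shows up in /proc/partitions once the kernel has attached it
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # illustrative pause; not part of the traced helper
      done
      # prove it is readable: one 4 KiB O_DIRECT read into a scratch file
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }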
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:26.076 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:26.076 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.076 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:26.076 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:26.076 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:26.076 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.076 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.337 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.599 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:26.866 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:26.866 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:26.866 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:26.866 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.866 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.866 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:26.866 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.866 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.866 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.867 22:56:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.128 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.387 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
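Teardown mirrors the setup: each exported node is released with nbd_stop_disk, waitfornbd_exit re-checks /proc/partitions before returning, and the suite then asserts that nbd_get_disks reports nothing attached. Condensed into a sketch (socket path and RPC names as traced; the jq/grep -c counting matches what the trace does, including the fact that grep -c exits non-zero on a zero count, hence the || true):

  sock=/var/tmp/spdk-nbd.sock
  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
      ./scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
  done
  # expect an empty list back; grep -c prints 0 (and exits 1) in that case
  count=$(./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] || echo "NBD devices still attached: $count" >&2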
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:27.644 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:27.903 /dev/nbd0 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.903 1+0 records in 00:15:27.903 1+0 records out 00:15:27.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696679 s, 5.9 MB/s 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:27.903 22:56:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:27.903 /dev/nbd1 00:15:28.160 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:28.160 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:28.160 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:28.160 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.160 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.161 1+0 records in 00:15:28.161 1+0 records out 00:15:28.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755275 s, 5.4 MB/s 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.161 22:56:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:28.161 /dev/nbd10 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.161 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.161 1+0 records in 00:15:28.161 1+0 records out 00:15:28.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583678 s, 7.0 MB/s 00:15:28.418 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.418 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:28.419 /dev/nbd11 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.419 22:56:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.419 1+0 records in 00:15:28.419 1+0 records out 00:15:28.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503493 s, 8.1 MB/s 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.419 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:28.677 /dev/nbd12 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.677 1+0 records in 00:15:28.677 1+0 records out 00:15:28.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513469 s, 8.0 MB/s 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.677 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:28.935 /dev/nbd13 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.935 1+0 records in 00:15:28.935 1+0 records out 00:15:28.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354417 s, 11.6 MB/s 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.935 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.936 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.936 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:28.936 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.936 22:56:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd0", 00:15:29.194 "bdev_name": "nvme0n1" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd1", 00:15:29.194 "bdev_name": "nvme0n2" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd10", 00:15:29.194 "bdev_name": "nvme0n3" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd11", 00:15:29.194 "bdev_name": "nvme1n1" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd12", 00:15:29.194 "bdev_name": "nvme2n1" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd13", 00:15:29.194 "bdev_name": "nvme3n1" 00:15:29.194 } 00:15:29.194 ]' 00:15:29.194 22:56:08 
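In this second pass each nbd_start_disk call names its target node explicitly (nvme0n1 on /dev/nbd0, nvme0n3 on /dev/nbd10, and so on) rather than letting the RPC pick the next free one, as the first pass did. The loop being traced is roughly the following (bdev and node names from the trace; expressing the mapping as two parallel arrays is just an illustrative convenience):

  sock=/var/tmp/spdk-nbd.sock
  bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

  for i in "${!bdevs[@]}"; do
      # export bdev i on the requested node, then run the readiness probe sketched earlier
      ./scripts/rpc.py -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
      wait_for_nbd "$(basename "${nbds[$i]}")"
  done

  # list what is attached; the test expects all six nodes to come back
  ./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'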
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd0", 00:15:29.194 "bdev_name": "nvme0n1" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd1", 00:15:29.194 "bdev_name": "nvme0n2" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd10", 00:15:29.194 "bdev_name": "nvme0n3" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd11", 00:15:29.194 "bdev_name": "nvme1n1" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd12", 00:15:29.194 "bdev_name": "nvme2n1" 00:15:29.194 }, 00:15:29.194 { 00:15:29.194 "nbd_device": "/dev/nbd13", 00:15:29.194 "bdev_name": "nvme3n1" 00:15:29.194 } 00:15:29.194 ]' 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:29.194 /dev/nbd1 00:15:29.194 /dev/nbd10 00:15:29.194 /dev/nbd11 00:15:29.194 /dev/nbd12 00:15:29.194 /dev/nbd13' 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:29.194 /dev/nbd1 00:15:29.194 /dev/nbd10 00:15:29.194 /dev/nbd11 00:15:29.194 /dev/nbd12 00:15:29.194 /dev/nbd13' 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:29.194 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:29.194 256+0 records in 00:15:29.194 256+0 records out 00:15:29.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00762027 s, 138 MB/s 00:15:29.195 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.195 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:29.195 256+0 records in 00:15:29.195 256+0 records out 00:15:29.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0890559 s, 11.8 MB/s 00:15:29.195 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.195 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:29.456 256+0 records in 00:15:29.456 256+0 records out 00:15:29.456 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.16514 s, 6.3 MB/s 00:15:29.456 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.456 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:29.717 256+0 records in 00:15:29.717 256+0 records out 00:15:29.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.202935 s, 5.2 MB/s 00:15:29.717 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.717 22:56:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:29.983 256+0 records in 00:15:29.983 256+0 records out 00:15:29.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.29986 s, 3.5 MB/s 00:15:29.983 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.983 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:30.291 256+0 records in 00:15:30.291 256+0 records out 00:15:30.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190211 s, 5.5 MB/s 00:15:30.291 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:30.291 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:30.291 256+0 records in 00:15:30.291 256+0 records out 00:15:30.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198433 s, 5.3 MB/s 00:15:30.565 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:30.565 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:30.565 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:30.565 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:30.565 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:30.565 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:30.565 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:30.565 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.565 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.566 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:30.824 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:30.824 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:30.824 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:30.824 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.824 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.824 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:30.824 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.824 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.825 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.825 22:56:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
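The nbd_dd_data_verify steps above boil down to a write/read-back round trip: fill a 1 MiB scratch file from /dev/urandom, copy it onto every exported node with O_DIRECT, then compare each node against the scratch file over the same 1 MiB. A compact sketch (block size, count and the cmp flags match the trace; the error message is illustrative):

  tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

  dd if=/dev/urandom of="$tmp" bs=4096 count=256                  # 256 x 4 KiB = 1 MiB of random data
  for dev in "${nbds[@]}"; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct       # write pass
  done
  for dev in "${nbds[@]}"; do
      cmp -b -n 1M "$tmp" "$dev" || echo "mismatch on $dev" >&2   # verify pass
  done
  rm "$tmp"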
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.083 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.342 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.600 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:31.858 22:56:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:32.117 malloc_lvol_verify 00:15:32.117 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:32.375 cf3840bd-4bf2-4892-b109-71aed24dd4fc 00:15:32.375 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:32.633 a44f2730-08af-4f0d-830a-3652d8e074ac 00:15:32.633 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:32.633 /dev/nbd0 00:15:32.891 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:32.891 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:32.892 mke2fs 1.47.0 (5-Feb-2023) 00:15:32.892 
Discarding device blocks: 0/4096 done 00:15:32.892 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:32.892 00:15:32.892 Allocating group tables: 0/1 done 00:15:32.892 Writing inode tables: 0/1 done 00:15:32.892 Creating journal (1024 blocks): done 00:15:32.892 Writing superblocks and filesystem accounting information: 0/1 done 00:15:32.892 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.892 22:56:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74105 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74105 ']' 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74105 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74105 00:15:32.892 killing process with pid 74105 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74105' 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74105 00:15:32.892 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74105 00:15:33.832 22:56:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:33.832 00:15:33.832 real 0m10.319s 00:15:33.832 user 0m14.191s 00:15:33.832 sys 0m3.520s 00:15:33.832 22:56:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.832 ************************************ 00:15:33.832 END TEST bdev_nbd 00:15:33.832 ************************************ 00:15:33.832 22:56:12 blockdev_xnvme.bdev_nbd -- 
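The nbd_with_lvol_verify stage traced just above stacks a logical volume on a RAM-backed bdev and proves a filesystem can be created on it end to end. Reduced to its essential RPC sequence (names and sizes from the trace; treat this as an outline of the helper, not its exact contents):

  sock=/var/tmp/spdk-nbd.sock
  rpc=./scripts/rpc.py

  $rpc -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
  $rpc -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
  $rpc -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol inside the store
  $rpc -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0

  mkfs.ext4 /dev/nbd0                        # the 4096 x 1k-block filesystem seen in the mke2fs output
  $rpc -s "$sock" nbd_stop_disk /dev/nbd0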
common/autotest_common.sh@10 -- # set +x 00:15:33.832 22:56:12 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:33.832 22:56:12 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:33.832 22:56:12 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:33.832 22:56:12 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:33.832 22:56:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:33.832 22:56:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.832 22:56:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:33.832 ************************************ 00:15:33.832 START TEST bdev_fio 00:15:33.832 ************************************ 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:33.832 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:33.832 
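At this point fio_config_gen has created bdev.fio for the verify workload and, after confirming the fio binary is a 3.x build, echoes serialize_overlap=1 (presumably appended to that file); the [job_*] sections echoed next add one job per bdev. The generated file itself is not printed in the trace, but its shape is roughly the following ini layout, with the global verify options left as a placeholder:

  [global]
  # verify-workload options written by fio_config_gen (not shown in the trace)
  serialize_overlap=1

  [job_nvme0n1]
  filename=nvme0n1

  [job_nvme0n2]
  filename=nvme0n2

  [job_nvme0n3]
  filename=nvme0n3

  [job_nvme1n1]
  filename=nvme1n1

  [job_nvme2n1]
  filename=nvme2n1

  [job_nvme3n1]
  filename=nvme3n1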
22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:33.832 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:33.833 ************************************ 00:15:33.833 START TEST bdev_fio_rw_verify 00:15:33.833 ************************************ 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:33.833 22:56:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:33.833 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:33.833 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:33.833 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:33.833 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:33.833 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:33.833 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:33.833 fio-3.35 00:15:33.833 Starting 6 threads 00:15:46.065 00:15:46.065 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74520: Fri Dec 13 22:56:24 2024 00:15:46.065 read: IOPS=18.5k, BW=72.1MiB/s (75.6MB/s)(721MiB/10002msec) 00:15:46.065 slat (usec): min=2, max=2772, avg= 5.56, stdev=14.72 00:15:46.065 clat (usec): min=65, max=6879, avg=1024.65, 
stdev=703.14 00:15:46.065 lat (usec): min=68, max=6884, avg=1030.20, stdev=703.68 00:15:46.065 clat percentiles (usec): 00:15:46.065 | 50.000th=[ 898], 99.000th=[ 3195], 99.900th=[ 4490], 99.990th=[ 6521], 00:15:46.065 | 99.999th=[ 6849] 00:15:46.065 write: IOPS=18.7k, BW=73.2MiB/s (76.7MB/s)(732MiB/10002msec); 0 zone resets 00:15:46.065 slat (usec): min=3, max=4437, avg=35.90, stdev=121.25 00:15:46.065 clat (usec): min=58, max=8925, avg=1261.10, stdev=773.06 00:15:46.065 lat (usec): min=86, max=8957, avg=1297.00, stdev=786.11 00:15:46.065 clat percentiles (usec): 00:15:46.065 | 50.000th=[ 1139], 99.000th=[ 3621], 99.900th=[ 4948], 99.990th=[ 6587], 00:15:46.065 | 99.999th=[ 8848] 00:15:46.065 bw ( KiB/s): min=49100, max=134432, per=100.00%, avg=75817.37, stdev=3823.44, samples=114 00:15:46.065 iops : min=12272, max=33608, avg=18953.68, stdev=955.89, samples=114 00:15:46.065 lat (usec) : 100=0.04%, 250=6.04%, 500=15.71%, 750=14.78%, 1000=12.35% 00:15:46.065 lat (msec) : 2=38.70%, 4=12.00%, 10=0.38% 00:15:46.065 cpu : usr=43.79%, sys=30.41%, ctx=6320, majf=0, minf=17409 00:15:46.065 IO depths : 1=11.5%, 2=23.9%, 4=51.0%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.065 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.065 issued rwts: total=184589,187349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.065 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:46.065 00:15:46.065 Run status group 0 (all jobs): 00:15:46.065 READ: bw=72.1MiB/s (75.6MB/s), 72.1MiB/s-72.1MiB/s (75.6MB/s-75.6MB/s), io=721MiB (756MB), run=10002-10002msec 00:15:46.065 WRITE: bw=73.2MiB/s (76.7MB/s), 73.2MiB/s-73.2MiB/s (76.7MB/s-76.7MB/s), io=732MiB (767MB), run=10002-10002msec 00:15:46.638 ----------------------------------------------------- 00:15:46.638 Suppressions used: 00:15:46.638 count bytes template 00:15:46.638 6 48 /usr/src/fio/parse.c 00:15:46.638 2622 251712 /usr/src/fio/iolog.c 00:15:46.638 1 8 libtcmalloc_minimal.so 00:15:46.638 1 904 libcrypto.so 00:15:46.638 ----------------------------------------------------- 00:15:46.638 00:15:46.638 00:15:46.638 real 0m12.873s 00:15:46.638 user 0m27.664s 00:15:46.638 sys 0m18.518s 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:46.638 ************************************ 00:15:46.638 END TEST bdev_fio_rw_verify 00:15:46.638 ************************************ 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "27cf5ffc-90b4-45ed-8f95-23d75740a93e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "27cf5ffc-90b4-45ed-8f95-23d75740a93e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "51538fb2-b5b2-4a72-911d-085b7cfd18c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "51538fb2-b5b2-4a72-911d-085b7cfd18c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "7686a7f2-ab78-4175-bdc0-a8b452c5b166"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7686a7f2-ab78-4175-bdc0-a8b452c5b166",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "449e6762-99e1-47bb-8d39-f0aa43f7e726"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "449e6762-99e1-47bb-8d39-f0aa43f7e726",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "db9de1ea-b93b-4bc0-a2f4-f2de455af491"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "db9de1ea-b93b-4bc0-a2f4-f2de455af491",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8fe1b783-9135-4c49-a870-55a91bbf5a8d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8fe1b783-9135-4c49-a870-55a91bbf5a8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.638 /home/vagrant/spdk_repo/spdk 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
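The bdev_fio_rw_verify pass above reduces to a single fio invocation with SPDK's spdk_bdev ioengine plugin: libasan is resolved via ldd and LD_PRELOADed ahead of the plugin so the sanitized build loads cleanly. A hedged standalone reproduction, using only flags and paths that appear in this trace (they are specific to this CI box):

    # Re-running the verify pass by hand.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

The trim pass that follows reuses the same mechanics but first narrows the job list to unmap-capable devices with jq -r 'select(.supported_io_types.unmap == true) | .name' over the dumped bdev JSON, as shown above.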
00:15:46.638 ************************************ 00:15:46.638 END TEST bdev_fio 00:15:46.638 ************************************ 00:15:46.638 00:15:46.638 real 0m13.040s 00:15:46.638 user 0m27.734s 00:15:46.638 sys 0m18.593s 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.638 22:56:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:46.638 22:56:25 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:46.639 22:56:25 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:46.639 22:56:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:46.639 22:56:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.639 22:56:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.639 ************************************ 00:15:46.639 START TEST bdev_verify 00:15:46.639 ************************************ 00:15:46.639 22:56:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:46.899 [2024-12-13 22:56:25.819336] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:46.899 [2024-12-13 22:56:25.819447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74696 ] 00:15:46.899 [2024-12-13 22:56:25.979032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.159 [2024-12-13 22:56:26.074983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.159 [2024-12-13 22:56:26.075089] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.420 Running I/O for 5 seconds... 
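The bdev_verify stage starting here drives the same six xNVMe bdevs through SPDK's bdevperf example application instead of fio: 128 outstanding 4 KiB I/Os per bdev in a verify workload for 5 seconds, spread across the two reactors in core mask 0x3. Spelled out as a standalone sketch (the trailing empty argument passed by blockdev.sh is omitted):

    # Verify workload as traced above; bdevs come from the generated bdev.json.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3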
00:15:49.749 21504.00 IOPS, 84.00 MiB/s [2024-12-13T22:56:29.831Z] 21424.00 IOPS, 83.69 MiB/s [2024-12-13T22:56:30.773Z] 22560.00 IOPS, 88.13 MiB/s [2024-12-13T22:56:31.784Z] 22768.00 IOPS, 88.94 MiB/s [2024-12-13T22:56:31.784Z] 23110.00 IOPS, 90.27 MiB/s 00:15:52.644 Latency(us) 00:15:52.644 [2024-12-13T22:56:31.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.644 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.644 Verification LBA range: start 0x0 length 0x80000 00:15:52.644 nvme0n1 : 5.05 1824.73 7.13 0.00 0.00 69994.88 11695.66 69367.34 00:15:52.644 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.644 Verification LBA range: start 0x80000 length 0x80000 00:15:52.644 nvme0n1 : 5.06 1872.54 7.31 0.00 0.00 68216.39 8015.56 66140.95 00:15:52.644 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.644 Verification LBA range: start 0x0 length 0x80000 00:15:52.644 nvme0n2 : 5.03 1807.29 7.06 0.00 0.00 70496.65 14014.62 62511.26 00:15:52.644 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.644 Verification LBA range: start 0x80000 length 0x80000 00:15:52.644 nvme0n2 : 5.06 1872.02 7.31 0.00 0.00 68099.29 9981.64 64124.46 00:15:52.644 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.644 Verification LBA range: start 0x0 length 0x80000 00:15:52.644 nvme0n3 : 5.04 1803.44 7.04 0.00 0.00 70475.07 8721.33 70980.53 00:15:52.644 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.644 Verification LBA range: start 0x80000 length 0x80000 00:15:52.645 nvme0n3 : 5.06 1871.47 7.31 0.00 0.00 67971.61 5646.18 70577.23 00:15:52.645 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.645 Verification LBA range: start 0x0 length 0xbd0bd 00:15:52.645 nvme1n1 : 5.06 2339.45 9.14 0.00 0.00 54110.72 4663.14 64124.46 00:15:52.645 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.645 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:52.645 nvme1n1 : 5.07 2477.81 9.68 0.00 0.00 51183.76 5999.06 58478.28 00:15:52.645 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.645 Verification LBA range: start 0x0 length 0x20000 00:15:52.645 nvme2n1 : 5.08 1865.83 7.29 0.00 0.00 67745.82 5797.42 61704.66 00:15:52.645 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.645 Verification LBA range: start 0x20000 length 0x20000 00:15:52.645 nvme2n1 : 5.08 1914.39 7.48 0.00 0.00 66177.62 4360.66 68560.74 00:15:52.645 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.645 Verification LBA range: start 0x0 length 0xa0000 00:15:52.645 nvme3n1 : 5.08 1763.27 6.89 0.00 0.00 71572.91 1506.07 93565.24 00:15:52.645 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:52.645 Verification LBA range: start 0xa0000 length 0xa0000 00:15:52.645 nvme3n1 : 5.08 1462.52 5.71 0.00 0.00 86415.65 6099.89 98001.53 00:15:52.645 [2024-12-13T22:56:31.785Z] =================================================================================================================== 00:15:52.645 [2024-12-13T22:56:31.785Z] Total : 22874.76 89.35 0.00 0.00 66609.64 1506.07 98001.53 00:15:53.216 00:15:53.216 real 0m6.556s 00:15:53.216 user 0m10.941s 00:15:53.216 sys 0m1.215s 00:15:53.216 22:56:32 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.216 22:56:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:53.216 ************************************ 00:15:53.216 END TEST bdev_verify 00:15:53.216 ************************************ 00:15:53.477 22:56:32 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:53.477 22:56:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:53.477 22:56:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.477 22:56:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.477 ************************************ 00:15:53.477 START TEST bdev_verify_big_io 00:15:53.477 ************************************ 00:15:53.477 22:56:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:53.477 [2024-12-13 22:56:32.440750] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:53.477 [2024-12-13 22:56:32.440876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74792 ] 00:15:53.477 [2024-12-13 22:56:32.602241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:53.737 [2024-12-13 22:56:32.699135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.737 [2024-12-13 22:56:32.699213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.307 Running I/O for 5 seconds... 
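bdev_verify_big_io, starting here, is the same bdevperf verify run with the I/O size raised from 4096 to 65536 bytes (-o 65536); everything else in the command is unchanged. The MiB/s column in these tables is simply IOPS times block size, which is a quick way to sanity-check them, for example against the small-block run above:

    # 21504 IOPS at 4 KiB from the bdev_verify table:
    echo $(( 21504 * 4096 / 1048576 ))   # -> 84 (MiB/s), matching "84.00 MiB/s"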
00:16:00.418 1440.00 IOPS, 90.00 MiB/s [2024-12-13T22:56:39.558Z] 3018.50 IOPS, 188.66 MiB/s 00:16:00.418 Latency(us) 00:16:00.418 [2024-12-13T22:56:39.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.418 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.418 Verification LBA range: start 0x0 length 0x8000 00:16:00.418 nvme0n1 : 5.82 126.41 7.90 0.00 0.00 959662.80 33272.12 1187310.67 00:16:00.418 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.418 Verification LBA range: start 0x8000 length 0x8000 00:16:00.418 nvme0n1 : 5.73 111.65 6.98 0.00 0.00 1090696.74 112116.97 1251838.42 00:16:00.418 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.418 Verification LBA range: start 0x0 length 0x8000 00:16:00.418 nvme0n2 : 5.82 98.89 6.18 0.00 0.00 1198475.60 104051.00 1626099.40 00:16:00.418 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.418 Verification LBA range: start 0x8000 length 0x8000 00:16:00.418 nvme0n2 : 6.08 86.87 5.43 0.00 0.00 1366073.67 6956.90 1464780.01 00:16:00.418 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.418 Verification LBA range: start 0x0 length 0x8000 00:16:00.418 nvme0n3 : 5.83 107.07 6.69 0.00 0.00 1067545.89 127442.31 1716438.25 00:16:00.418 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.418 Verification LBA range: start 0x8000 length 0x8000 00:16:00.418 nvme0n3 : 5.93 105.22 6.58 0.00 0.00 1112619.18 3478.45 1580929.97 00:16:00.418 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.418 Verification LBA range: start 0x0 length 0xbd0b 00:16:00.418 nvme1n1 : 5.94 131.96 8.25 0.00 0.00 848217.68 89532.26 1303460.63 00:16:00.418 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.418 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:00.418 nvme1n1 : 5.97 142.02 8.88 0.00 0.00 791760.43 8015.56 1213121.77 00:16:00.419 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.419 Verification LBA range: start 0x0 length 0x2000 00:16:00.419 nvme2n1 : 5.94 115.88 7.24 0.00 0.00 926256.69 95985.03 1897115.96 00:16:00.419 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.419 Verification LBA range: start 0x2000 length 0x2000 00:16:00.419 nvme2n1 : 5.98 123.18 7.70 0.00 0.00 880356.98 40934.79 1258291.20 00:16:00.419 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:00.419 Verification LBA range: start 0x0 length 0xa000 00:16:00.419 nvme3n1 : 6.09 155.07 9.69 0.00 0.00 675234.37 604.95 1458327.24 00:16:00.419 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:00.419 Verification LBA range: start 0xa000 length 0xa000 00:16:00.419 nvme3n1 : 6.08 157.80 9.86 0.00 0.00 664741.83 642.76 967916.31 00:16:00.419 [2024-12-13T22:56:39.559Z] =================================================================================================================== 00:16:00.419 [2024-12-13T22:56:39.559Z] Total : 1462.02 91.38 0.00 0.00 929280.14 604.95 1897115.96 00:16:01.359 00:16:01.359 real 0m7.789s 00:16:01.359 user 0m14.419s 00:16:01.359 sys 0m0.375s 00:16:01.359 22:56:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.359 ************************************ 00:16:01.359 END TEST bdev_verify_big_io 
00:16:01.359 ************************************ 00:16:01.359 22:56:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.359 22:56:40 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:01.359 22:56:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:01.359 22:56:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.359 22:56:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.359 ************************************ 00:16:01.359 START TEST bdev_write_zeroes 00:16:01.359 ************************************ 00:16:01.359 22:56:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:01.359 [2024-12-13 22:56:40.302261] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:01.359 [2024-12-13 22:56:40.302795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74900 ] 00:16:01.359 [2024-12-13 22:56:40.462576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.619 [2024-12-13 22:56:40.559419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.880 Running I/O for 1 seconds... 00:16:03.264 67291.00 IOPS, 262.86 MiB/s 00:16:03.264 Latency(us) 00:16:03.264 [2024-12-13T22:56:42.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.264 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.264 nvme0n1 : 1.04 10623.01 41.50 0.00 0.00 12034.42 4789.17 26819.35 00:16:03.264 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.264 nvme0n2 : 1.03 10613.81 41.46 0.00 0.00 12039.90 6654.42 27021.00 00:16:03.264 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.264 nvme0n3 : 1.03 10601.12 41.41 0.00 0.00 12046.27 6755.25 27424.30 00:16:03.264 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.264 nvme1n1 : 1.04 13261.36 51.80 0.00 0.00 9616.42 2545.82 24197.91 00:16:03.264 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.264 nvme2n1 : 1.03 10662.97 41.65 0.00 0.00 11928.93 4285.05 26012.75 00:16:03.264 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:03.264 nvme3n1 : 1.04 10557.03 41.24 0.00 0.00 11997.67 4133.81 27625.94 00:16:03.264 [2024-12-13T22:56:42.404Z] =================================================================================================================== 00:16:03.264 [2024-12-13T22:56:42.404Z] Total : 66319.29 259.06 0.00 0.00 11528.17 2545.82 27625.94 00:16:03.836 ************************************ 00:16:03.836 END TEST bdev_write_zeroes 00:16:03.836 00:16:03.836 real 0m2.481s 00:16:03.836 user 0m1.846s 00:16:03.836 sys 0m0.451s 00:16:03.836 22:56:42 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.836 22:56:42 blockdev_xnvme.bdev_write_zeroes -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.836 ************************************ 00:16:03.836 22:56:42 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:03.836 22:56:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:03.836 22:56:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.836 22:56:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:03.836 ************************************ 00:16:03.836 START TEST bdev_json_nonenclosed 00:16:03.836 ************************************ 00:16:03.836 22:56:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:03.836 [2024-12-13 22:56:42.842669] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:03.836 [2024-12-13 22:56:42.842801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74955 ] 00:16:04.097 [2024-12-13 22:56:42.999897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.097 [2024-12-13 22:56:43.094641] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.097 [2024-12-13 22:56:43.094713] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:04.097 [2024-12-13 22:56:43.094729] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:04.097 [2024-12-13 22:56:43.094738] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:04.357 00:16:04.357 real 0m0.489s 00:16:04.357 user 0m0.289s 00:16:04.357 sys 0m0.096s 00:16:04.358 22:56:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.358 22:56:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:04.358 ************************************ 00:16:04.358 END TEST bdev_json_nonenclosed 00:16:04.358 ************************************ 00:16:04.358 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.358 22:56:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:04.358 22:56:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.358 22:56:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.358 ************************************ 00:16:04.358 START TEST bdev_json_nonarray 00:16:04.358 ************************************ 00:16:04.358 22:56:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:04.358 [2024-12-13 22:56:43.391880] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
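The two json_config tests here are negative tests: bdevperf is handed a deliberately malformed --json file, and the test only checks that startup fails with the expected error and a non-zero spdk_app_stop. Going by the error strings in the trace ("not enclosed in {}" above, and "'subsystems' should be an array" for the test starting here), the inputs are plausibly along these lines; these are hypothetical reconstructions, not the repo files verbatim:

    # nonenclosed.json: subsystem content without the enclosing top-level object
    printf '%s\n' '"subsystems": []' > nonenclosed.json
    # nonarray.json: enclosed in {}, but "subsystems" is not an array
    printf '%s\n' '{ "subsystems": {} }' > nonarray.json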
00:16:04.358 [2024-12-13 22:56:43.391991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74975 ] 00:16:04.619 [2024-12-13 22:56:43.553278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.619 [2024-12-13 22:56:43.650367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.619 [2024-12-13 22:56:43.650447] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:04.619 [2024-12-13 22:56:43.650464] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:04.619 [2024-12-13 22:56:43.650473] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:04.880 00:16:04.880 real 0m0.498s 00:16:04.880 user 0m0.305s 00:16:04.880 sys 0m0.089s 00:16:04.880 22:56:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.880 ************************************ 00:16:04.880 END TEST bdev_json_nonarray 00:16:04.880 ************************************ 00:16:04.880 22:56:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:04.880 22:56:43 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:05.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:15.452 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.452 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.452 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.452 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.452 00:16:15.452 real 0m59.027s 00:16:15.452 user 1m21.569s 00:16:15.452 sys 0m43.385s 00:16:15.452 ************************************ 00:16:15.452 END TEST blockdev_xnvme 00:16:15.452 ************************************ 00:16:15.452 22:56:53 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.452 22:56:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.452 22:56:53 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:15.452 22:56:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.452 22:56:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.452 22:56:53 -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.452 ************************************ 00:16:15.452 START TEST ublk 00:16:15.452 ************************************ 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:15.452 * Looking for test storage... 00:16:15.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:15.452 22:56:53 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.452 22:56:53 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.452 22:56:53 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.452 22:56:53 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.452 22:56:53 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.452 22:56:53 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.452 22:56:53 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.452 22:56:53 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.452 22:56:53 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.452 22:56:53 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.452 22:56:53 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.452 22:56:53 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:15.452 22:56:53 ublk -- scripts/common.sh@345 -- # : 1 00:16:15.452 22:56:53 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.452 22:56:53 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.452 22:56:53 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:15.452 22:56:53 ublk -- scripts/common.sh@353 -- # local d=1 00:16:15.452 22:56:53 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.452 22:56:53 ublk -- scripts/common.sh@355 -- # echo 1 00:16:15.452 22:56:53 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.452 22:56:53 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:15.452 22:56:53 ublk -- scripts/common.sh@353 -- # local d=2 00:16:15.452 22:56:53 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.452 22:56:53 ublk -- scripts/common.sh@355 -- # echo 2 00:16:15.452 22:56:53 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.452 22:56:53 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.452 22:56:53 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.452 22:56:53 ublk -- scripts/common.sh@368 -- # return 0 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:15.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.452 --rc genhtml_branch_coverage=1 00:16:15.452 --rc genhtml_function_coverage=1 00:16:15.452 --rc genhtml_legend=1 00:16:15.452 --rc geninfo_all_blocks=1 00:16:15.452 --rc geninfo_unexecuted_blocks=1 00:16:15.452 00:16:15.452 ' 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:15.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.452 --rc genhtml_branch_coverage=1 00:16:15.452 --rc genhtml_function_coverage=1 00:16:15.452 --rc genhtml_legend=1 00:16:15.452 --rc geninfo_all_blocks=1 00:16:15.452 --rc geninfo_unexecuted_blocks=1 00:16:15.452 00:16:15.452 ' 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:15.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.452 --rc genhtml_branch_coverage=1 00:16:15.452 --rc genhtml_function_coverage=1 00:16:15.452 --rc genhtml_legend=1 00:16:15.452 --rc geninfo_all_blocks=1 00:16:15.452 --rc geninfo_unexecuted_blocks=1 00:16:15.452 00:16:15.452 ' 00:16:15.452 22:56:53 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:15.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.452 --rc genhtml_branch_coverage=1 00:16:15.452 --rc genhtml_function_coverage=1 00:16:15.452 --rc genhtml_legend=1 00:16:15.452 --rc geninfo_all_blocks=1 00:16:15.452 --rc geninfo_unexecuted_blocks=1 00:16:15.452 00:16:15.452 ' 00:16:15.452 22:56:53 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:15.452 22:56:53 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:15.452 22:56:53 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:15.452 22:56:53 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:15.452 22:56:53 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:15.452 22:56:53 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:15.452 22:56:53 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:15.452 22:56:53 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:15.452 22:56:53 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:15.452 22:56:53 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:15.453 22:56:53 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:15.453 22:56:53 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:15.453 22:56:53 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:15.453 22:56:53 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:15.453 22:56:53 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:15.453 22:56:53 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:15.453 22:56:53 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:15.453 22:56:53 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:15.453 22:56:53 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:15.453 22:56:53 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:15.453 22:56:53 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.453 22:56:53 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.453 22:56:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.453 ************************************ 00:16:15.453 START TEST test_save_ublk_config 00:16:15.453 ************************************ 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75273 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75273 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75273 ']' 00:16:15.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:15.453 22:56:53 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:15.453 [2024-12-13 22:56:53.612074] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
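test_save_ublk_config, whose target is starting here with -L ublk, creates a ublk target (cpumask 1), a malloc0 bdev, and /dev/ublkb0 with one queue of depth 128, snapshots the live configuration with rpc_cmd save_config, and then hands that JSON to a second spdk_tgt (pid 75329 below) to prove the saved config restores the same state. A hedged sketch of that round trip done by hand; the /tmp path is illustrative only:

    # Capture the running target's configuration over the default RPC socket.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/ublk_config.json

    # A fresh target started from the snapshot should recreate the ublk target,
    # malloc0 and /dev/ublkb0 without any further RPC calls.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --json /tmp/ublk_config.json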
00:16:15.453 [2024-12-13 22:56:53.612189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75273 ] 00:16:15.453 [2024-12-13 22:56:53.770551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.453 [2024-12-13 22:56:53.864981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:15.453 [2024-12-13 22:56:54.470776] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:15.453 [2024-12-13 22:56:54.471536] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:15.453 malloc0 00:16:15.453 [2024-12-13 22:56:54.533878] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:15.453 [2024-12-13 22:56:54.533949] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:15.453 [2024-12-13 22:56:54.533959] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:15.453 [2024-12-13 22:56:54.533966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:15.453 [2024-12-13 22:56:54.541794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:15.453 [2024-12-13 22:56:54.541814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:15.453 [2024-12-13 22:56:54.549781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:15.453 [2024-12-13 22:56:54.549871] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:15.453 [2024-12-13 22:56:54.573786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:15.453 0 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.453 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:16.025 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.025 22:56:54 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:16.025 "subsystems": [ 00:16:16.025 { 00:16:16.025 "subsystem": "fsdev", 00:16:16.025 "config": [ 00:16:16.025 { 00:16:16.025 "method": "fsdev_set_opts", 00:16:16.025 "params": { 00:16:16.025 "fsdev_io_pool_size": 65535, 00:16:16.025 "fsdev_io_cache_size": 256 00:16:16.025 } 00:16:16.025 } 00:16:16.025 ] 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "subsystem": "keyring", 00:16:16.025 "config": [] 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "subsystem": "iobuf", 00:16:16.025 "config": [ 00:16:16.025 { 
00:16:16.025 "method": "iobuf_set_options", 00:16:16.025 "params": { 00:16:16.025 "small_pool_count": 8192, 00:16:16.025 "large_pool_count": 1024, 00:16:16.025 "small_bufsize": 8192, 00:16:16.025 "large_bufsize": 135168, 00:16:16.025 "enable_numa": false 00:16:16.025 } 00:16:16.025 } 00:16:16.025 ] 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "subsystem": "sock", 00:16:16.025 "config": [ 00:16:16.025 { 00:16:16.025 "method": "sock_set_default_impl", 00:16:16.025 "params": { 00:16:16.025 "impl_name": "posix" 00:16:16.025 } 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "method": "sock_impl_set_options", 00:16:16.025 "params": { 00:16:16.025 "impl_name": "ssl", 00:16:16.025 "recv_buf_size": 4096, 00:16:16.025 "send_buf_size": 4096, 00:16:16.025 "enable_recv_pipe": true, 00:16:16.025 "enable_quickack": false, 00:16:16.025 "enable_placement_id": 0, 00:16:16.025 "enable_zerocopy_send_server": true, 00:16:16.025 "enable_zerocopy_send_client": false, 00:16:16.025 "zerocopy_threshold": 0, 00:16:16.025 "tls_version": 0, 00:16:16.025 "enable_ktls": false 00:16:16.025 } 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "method": "sock_impl_set_options", 00:16:16.025 "params": { 00:16:16.025 "impl_name": "posix", 00:16:16.025 "recv_buf_size": 2097152, 00:16:16.025 "send_buf_size": 2097152, 00:16:16.025 "enable_recv_pipe": true, 00:16:16.025 "enable_quickack": false, 00:16:16.025 "enable_placement_id": 0, 00:16:16.025 "enable_zerocopy_send_server": true, 00:16:16.025 "enable_zerocopy_send_client": false, 00:16:16.025 "zerocopy_threshold": 0, 00:16:16.025 "tls_version": 0, 00:16:16.025 "enable_ktls": false 00:16:16.025 } 00:16:16.025 } 00:16:16.025 ] 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "subsystem": "vmd", 00:16:16.025 "config": [] 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "subsystem": "accel", 00:16:16.025 "config": [ 00:16:16.025 { 00:16:16.025 "method": "accel_set_options", 00:16:16.025 "params": { 00:16:16.025 "small_cache_size": 128, 00:16:16.025 "large_cache_size": 16, 00:16:16.025 "task_count": 2048, 00:16:16.025 "sequence_count": 2048, 00:16:16.025 "buf_count": 2048 00:16:16.025 } 00:16:16.025 } 00:16:16.025 ] 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "subsystem": "bdev", 00:16:16.025 "config": [ 00:16:16.025 { 00:16:16.025 "method": "bdev_set_options", 00:16:16.025 "params": { 00:16:16.025 "bdev_io_pool_size": 65535, 00:16:16.025 "bdev_io_cache_size": 256, 00:16:16.025 "bdev_auto_examine": true, 00:16:16.025 "iobuf_small_cache_size": 128, 00:16:16.025 "iobuf_large_cache_size": 16 00:16:16.025 } 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "method": "bdev_raid_set_options", 00:16:16.025 "params": { 00:16:16.025 "process_window_size_kb": 1024, 00:16:16.025 "process_max_bandwidth_mb_sec": 0 00:16:16.025 } 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "method": "bdev_iscsi_set_options", 00:16:16.025 "params": { 00:16:16.025 "timeout_sec": 30 00:16:16.025 } 00:16:16.025 }, 00:16:16.025 { 00:16:16.025 "method": "bdev_nvme_set_options", 00:16:16.025 "params": { 00:16:16.025 "action_on_timeout": "none", 00:16:16.025 "timeout_us": 0, 00:16:16.025 "timeout_admin_us": 0, 00:16:16.025 "keep_alive_timeout_ms": 10000, 00:16:16.025 "arbitration_burst": 0, 00:16:16.025 "low_priority_weight": 0, 00:16:16.025 "medium_priority_weight": 0, 00:16:16.025 "high_priority_weight": 0, 00:16:16.025 "nvme_adminq_poll_period_us": 10000, 00:16:16.025 "nvme_ioq_poll_period_us": 0, 00:16:16.025 "io_queue_requests": 0, 00:16:16.025 "delay_cmd_submit": true, 00:16:16.025 "transport_retry_count": 4, 00:16:16.025 
"bdev_retry_count": 3, 00:16:16.025 "transport_ack_timeout": 0, 00:16:16.025 "ctrlr_loss_timeout_sec": 0, 00:16:16.025 "reconnect_delay_sec": 0, 00:16:16.025 "fast_io_fail_timeout_sec": 0, 00:16:16.025 "disable_auto_failback": false, 00:16:16.025 "generate_uuids": false, 00:16:16.025 "transport_tos": 0, 00:16:16.025 "nvme_error_stat": false, 00:16:16.025 "rdma_srq_size": 0, 00:16:16.025 "io_path_stat": false, 00:16:16.025 "allow_accel_sequence": false, 00:16:16.025 "rdma_max_cq_size": 0, 00:16:16.025 "rdma_cm_event_timeout_ms": 0, 00:16:16.025 "dhchap_digests": [ 00:16:16.025 "sha256", 00:16:16.025 "sha384", 00:16:16.025 "sha512" 00:16:16.025 ], 00:16:16.025 "dhchap_dhgroups": [ 00:16:16.025 "null", 00:16:16.025 "ffdhe2048", 00:16:16.025 "ffdhe3072", 00:16:16.025 "ffdhe4096", 00:16:16.025 "ffdhe6144", 00:16:16.025 "ffdhe8192" 00:16:16.025 ], 00:16:16.026 "rdma_umr_per_io": false 00:16:16.026 } 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "method": "bdev_nvme_set_hotplug", 00:16:16.026 "params": { 00:16:16.026 "period_us": 100000, 00:16:16.026 "enable": false 00:16:16.026 } 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "method": "bdev_malloc_create", 00:16:16.026 "params": { 00:16:16.026 "name": "malloc0", 00:16:16.026 "num_blocks": 8192, 00:16:16.026 "block_size": 4096, 00:16:16.026 "physical_block_size": 4096, 00:16:16.026 "uuid": "5ef0faf2-fe13-4861-9eb4-fcdca50d90ff", 00:16:16.026 "optimal_io_boundary": 0, 00:16:16.026 "md_size": 0, 00:16:16.026 "dif_type": 0, 00:16:16.026 "dif_is_head_of_md": false, 00:16:16.026 "dif_pi_format": 0 00:16:16.026 } 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "method": "bdev_wait_for_examine" 00:16:16.026 } 00:16:16.026 ] 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "subsystem": "scsi", 00:16:16.026 "config": null 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "subsystem": "scheduler", 00:16:16.026 "config": [ 00:16:16.026 { 00:16:16.026 "method": "framework_set_scheduler", 00:16:16.026 "params": { 00:16:16.026 "name": "static" 00:16:16.026 } 00:16:16.026 } 00:16:16.026 ] 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "subsystem": "vhost_scsi", 00:16:16.026 "config": [] 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "subsystem": "vhost_blk", 00:16:16.026 "config": [] 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "subsystem": "ublk", 00:16:16.026 "config": [ 00:16:16.026 { 00:16:16.026 "method": "ublk_create_target", 00:16:16.026 "params": { 00:16:16.026 "cpumask": "1" 00:16:16.026 } 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "method": "ublk_start_disk", 00:16:16.026 "params": { 00:16:16.026 "bdev_name": "malloc0", 00:16:16.026 "ublk_id": 0, 00:16:16.026 "num_queues": 1, 00:16:16.026 "queue_depth": 128 00:16:16.026 } 00:16:16.026 } 00:16:16.026 ] 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "subsystem": "nbd", 00:16:16.026 "config": [] 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "subsystem": "nvmf", 00:16:16.026 "config": [ 00:16:16.026 { 00:16:16.026 "method": "nvmf_set_config", 00:16:16.026 "params": { 00:16:16.026 "discovery_filter": "match_any", 00:16:16.026 "admin_cmd_passthru": { 00:16:16.026 "identify_ctrlr": false 00:16:16.026 }, 00:16:16.026 "dhchap_digests": [ 00:16:16.026 "sha256", 00:16:16.026 "sha384", 00:16:16.026 "sha512" 00:16:16.026 ], 00:16:16.026 "dhchap_dhgroups": [ 00:16:16.026 "null", 00:16:16.026 "ffdhe2048", 00:16:16.026 "ffdhe3072", 00:16:16.026 "ffdhe4096", 00:16:16.026 "ffdhe6144", 00:16:16.026 "ffdhe8192" 00:16:16.026 ] 00:16:16.026 } 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "method": "nvmf_set_max_subsystems", 00:16:16.026 "params": { 
00:16:16.026 "max_subsystems": 1024 00:16:16.026 } 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "method": "nvmf_set_crdt", 00:16:16.026 "params": { 00:16:16.026 "crdt1": 0, 00:16:16.026 "crdt2": 0, 00:16:16.026 "crdt3": 0 00:16:16.026 } 00:16:16.026 } 00:16:16.026 ] 00:16:16.026 }, 00:16:16.026 { 00:16:16.026 "subsystem": "iscsi", 00:16:16.026 "config": [ 00:16:16.026 { 00:16:16.026 "method": "iscsi_set_options", 00:16:16.026 "params": { 00:16:16.026 "node_base": "iqn.2016-06.io.spdk", 00:16:16.026 "max_sessions": 128, 00:16:16.026 "max_connections_per_session": 2, 00:16:16.026 "max_queue_depth": 64, 00:16:16.026 "default_time2wait": 2, 00:16:16.026 "default_time2retain": 20, 00:16:16.026 "first_burst_length": 8192, 00:16:16.026 "immediate_data": true, 00:16:16.026 "allow_duplicated_isid": false, 00:16:16.026 "error_recovery_level": 0, 00:16:16.026 "nop_timeout": 60, 00:16:16.026 "nop_in_interval": 30, 00:16:16.026 "disable_chap": false, 00:16:16.026 "require_chap": false, 00:16:16.026 "mutual_chap": false, 00:16:16.026 "chap_group": 0, 00:16:16.026 "max_large_datain_per_connection": 64, 00:16:16.026 "max_r2t_per_connection": 4, 00:16:16.026 "pdu_pool_size": 36864, 00:16:16.026 "immediate_data_pool_size": 16384, 00:16:16.026 "data_out_pool_size": 2048 00:16:16.026 } 00:16:16.026 } 00:16:16.026 ] 00:16:16.026 } 00:16:16.026 ] 00:16:16.026 }' 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75273 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75273 ']' 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75273 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75273 00:16:16.026 killing process with pid 75273 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75273' 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75273 00:16:16.026 22:56:54 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75273 00:16:16.967 [2024-12-13 22:56:55.914187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:16.967 [2024-12-13 22:56:55.957806] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:16.967 [2024-12-13 22:56:55.957916] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:16.967 [2024-12-13 22:56:55.965787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:16.967 [2024-12-13 22:56:55.965836] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:16.967 [2024-12-13 22:56:55.965848] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:16.967 [2024-12-13 22:56:55.965893] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:16.967 [2024-12-13 22:56:55.966031] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:18.345 22:56:57 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75329 00:16:18.345 22:56:57 
ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75329 00:16:18.345 22:56:57 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75329 ']' 00:16:18.345 22:56:57 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.345 22:56:57 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.345 22:56:57 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.345 22:56:57 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.345 22:56:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.345 22:56:57 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:18.345 "subsystems": [ 00:16:18.345 { 00:16:18.345 "subsystem": "fsdev", 00:16:18.345 "config": [ 00:16:18.345 { 00:16:18.345 "method": "fsdev_set_opts", 00:16:18.345 "params": { 00:16:18.345 "fsdev_io_pool_size": 65535, 00:16:18.345 "fsdev_io_cache_size": 256 00:16:18.345 } 00:16:18.345 } 00:16:18.345 ] 00:16:18.345 }, 00:16:18.345 { 00:16:18.345 "subsystem": "keyring", 00:16:18.345 "config": [] 00:16:18.345 }, 00:16:18.345 { 00:16:18.345 "subsystem": "iobuf", 00:16:18.345 "config": [ 00:16:18.345 { 00:16:18.345 "method": "iobuf_set_options", 00:16:18.345 "params": { 00:16:18.345 "small_pool_count": 8192, 00:16:18.345 "large_pool_count": 1024, 00:16:18.345 "small_bufsize": 8192, 00:16:18.345 "large_bufsize": 135168, 00:16:18.345 "enable_numa": false 00:16:18.345 } 00:16:18.345 } 00:16:18.345 ] 00:16:18.345 }, 00:16:18.345 { 00:16:18.345 "subsystem": "sock", 00:16:18.345 "config": [ 00:16:18.345 { 00:16:18.345 "method": "sock_set_default_impl", 00:16:18.345 "params": { 00:16:18.345 "impl_name": "posix" 00:16:18.345 } 00:16:18.345 }, 00:16:18.345 { 00:16:18.345 "method": "sock_impl_set_options", 00:16:18.345 "params": { 00:16:18.345 "impl_name": "ssl", 00:16:18.345 "recv_buf_size": 4096, 00:16:18.345 "send_buf_size": 4096, 00:16:18.345 "enable_recv_pipe": true, 00:16:18.345 "enable_quickack": false, 00:16:18.345 "enable_placement_id": 0, 00:16:18.345 "enable_zerocopy_send_server": true, 00:16:18.345 "enable_zerocopy_send_client": false, 00:16:18.345 "zerocopy_threshold": 0, 00:16:18.345 "tls_version": 0, 00:16:18.345 "enable_ktls": false 00:16:18.345 } 00:16:18.345 }, 00:16:18.345 { 00:16:18.345 "method": "sock_impl_set_options", 00:16:18.345 "params": { 00:16:18.345 "impl_name": "posix", 00:16:18.345 "recv_buf_size": 2097152, 00:16:18.345 "send_buf_size": 2097152, 00:16:18.345 "enable_recv_pipe": true, 00:16:18.345 "enable_quickack": false, 00:16:18.345 "enable_placement_id": 0, 00:16:18.345 "enable_zerocopy_send_server": true, 00:16:18.345 "enable_zerocopy_send_client": false, 00:16:18.345 "zerocopy_threshold": 0, 00:16:18.345 "tls_version": 0, 00:16:18.345 "enable_ktls": false 00:16:18.345 } 00:16:18.345 } 00:16:18.345 ] 00:16:18.345 }, 00:16:18.345 { 00:16:18.345 "subsystem": "vmd", 00:16:18.345 "config": [] 00:16:18.345 }, 00:16:18.345 { 00:16:18.345 "subsystem": "accel", 00:16:18.345 "config": [ 00:16:18.345 { 00:16:18.345 "method": "accel_set_options", 00:16:18.345 "params": { 00:16:18.345 "small_cache_size": 128, 00:16:18.345 "large_cache_size": 16, 00:16:18.345 "task_count": 2048, 00:16:18.346 "sequence_count": 2048, 00:16:18.346 
"buf_count": 2048 00:16:18.346 } 00:16:18.346 } 00:16:18.346 ] 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "subsystem": "bdev", 00:16:18.346 "config": [ 00:16:18.346 { 00:16:18.346 "method": "bdev_set_options", 00:16:18.346 "params": { 00:16:18.346 "bdev_io_pool_size": 65535, 00:16:18.346 "bdev_io_cache_size": 256, 00:16:18.346 "bdev_auto_examine": true, 00:16:18.346 "iobuf_small_cache_size": 128, 00:16:18.346 "iobuf_large_cache_size": 16 00:16:18.346 } 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "method": "bdev_raid_set_options", 00:16:18.346 "params": { 00:16:18.346 "process_window_size_kb": 1024, 00:16:18.346 "process_max_bandwidth_mb_sec": 0 00:16:18.346 } 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "method": "bdev_iscsi_set_options", 00:16:18.346 "params": { 00:16:18.346 "timeout_sec": 30 00:16:18.346 } 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "method": "bdev_nvme_set_options", 00:16:18.346 "params": { 00:16:18.346 "action_on_timeout": "none", 00:16:18.346 "timeout_us": 0, 00:16:18.346 "timeout_admin_us": 0, 00:16:18.346 "keep_alive_timeout_ms": 10000, 00:16:18.346 "arbitration_burst": 0, 00:16:18.346 "low_priority_weight": 0, 00:16:18.346 "medium_priority_weight": 0, 00:16:18.346 "high_priority_weight": 0, 00:16:18.346 "nvme_adminq_poll_period_us": 10000, 00:16:18.346 "nvme_ioq_poll_period_us": 0, 00:16:18.346 "io_queue_requests": 0, 00:16:18.346 "delay_cmd_submit": true, 00:16:18.346 "transport_retry_count": 4, 00:16:18.346 "bdev_retry_count": 3, 00:16:18.346 "transport_ack_timeout": 0, 00:16:18.346 "ctrlr_loss_timeout_sec": 0, 00:16:18.346 "reconnect_delay_sec": 0, 00:16:18.346 "fast_io_fail_timeout_sec": 0, 00:16:18.346 "disable_auto_failback": false, 00:16:18.346 "generate_uuids": false, 00:16:18.346 "transport_tos": 0, 00:16:18.346 "nvme_error_stat": false, 00:16:18.346 "rdma_srq_size": 0, 00:16:18.346 "io_path_stat": false, 00:16:18.346 "allow_accel_sequence": false, 00:16:18.346 "rdma_max_cq_size": 0, 00:16:18.346 "rdma_cm_event_timeout_ms": 0, 00:16:18.346 "dhchap_digests": [ 00:16:18.346 "sha256", 00:16:18.346 "sha384", 00:16:18.346 "sha512" 00:16:18.346 ], 00:16:18.346 "dhchap_dhgroups": [ 00:16:18.346 "null", 00:16:18.346 "ffdhe2048", 00:16:18.346 "ffdhe3072", 00:16:18.346 "ffdhe4096", 00:16:18.346 "ffdhe6144", 00:16:18.346 "ffdhe8192" 00:16:18.346 ], 00:16:18.346 "rdma_umr_per_io": false 00:16:18.346 } 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "method": "bdev_nvme_set_hotplug", 00:16:18.346 "params": { 00:16:18.346 "period_us": 100000, 00:16:18.346 "enable": false 00:16:18.346 } 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "method": "bdev_malloc_create", 00:16:18.346 "params": { 00:16:18.346 "name": "malloc0", 00:16:18.346 "num_blocks": 8192, 00:16:18.346 "block_size": 4096, 00:16:18.346 "physical_block_size": 4096, 00:16:18.346 "uuid": "5ef0faf2-fe13-4861-9eb4-fcdca50d90ff", 00:16:18.346 "optimal_io_boundary": 0, 00:16:18.346 "md_size": 0, 00:16:18.346 "dif_type": 0, 00:16:18.346 "dif_is_head_of_md": false, 00:16:18.346 "dif_pi_format": 0 00:16:18.346 } 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "method": "bdev_wait_for_examine" 00:16:18.346 } 00:16:18.346 ] 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "subsystem": "scsi", 00:16:18.346 "config": null 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "subsystem": "scheduler", 00:16:18.346 "config": [ 00:16:18.346 { 00:16:18.346 "method": "framework_set_scheduler", 00:16:18.346 "params": { 00:16:18.346 "name": "static" 00:16:18.346 } 00:16:18.346 } 00:16:18.346 ] 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "subsystem": 
"vhost_scsi", 00:16:18.346 "config": [] 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "subsystem": "vhost_blk", 00:16:18.346 "config": [] 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "subsystem": "ublk", 00:16:18.346 "config": [ 00:16:18.346 { 00:16:18.346 "method": "ublk_create_target", 00:16:18.346 "params": { 00:16:18.346 "cpumask": "1" 00:16:18.346 } 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "method": "ublk_start_disk", 00:16:18.346 "params": { 00:16:18.346 "bdev_name": "malloc0", 00:16:18.346 "ublk_id": 0, 00:16:18.346 "num_queues": 1, 00:16:18.346 "queue_depth": 128 00:16:18.346 } 00:16:18.346 } 00:16:18.346 ] 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "subsystem": "nbd", 00:16:18.346 "config": [] 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "subsystem": "nvmf", 00:16:18.346 "config": [ 00:16:18.346 { 00:16:18.346 "method": "nvmf_set_config", 00:16:18.346 "params": { 00:16:18.346 "discovery_filter": "match_any", 00:16:18.346 "admin_cmd_passthru": { 00:16:18.346 "identify_ctrlr": false 00:16:18.346 }, 00:16:18.346 "dhchap_digests": [ 00:16:18.346 "sha256", 00:16:18.346 "sha384", 00:16:18.346 "sha512" 00:16:18.346 ], 00:16:18.346 "dhchap_dhgroups": [ 00:16:18.346 "null", 00:16:18.346 "ffdhe2048", 00:16:18.346 "ffdhe3072", 00:16:18.346 "ffdhe4096", 00:16:18.346 "ffdhe6144", 00:16:18.346 "ffdhe8192" 00:16:18.346 ] 00:16:18.346 } 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "method": "nvmf_set_max_subsystems", 00:16:18.346 "params": { 00:16:18.346 "max_subsystems": 1024 00:16:18.346 } 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "method": "nvmf_set_crdt", 00:16:18.346 "params": { 00:16:18.346 "crdt1": 0, 00:16:18.346 "crdt2": 0, 00:16:18.346 "crdt3": 0 00:16:18.346 } 00:16:18.346 } 00:16:18.346 ] 00:16:18.346 }, 00:16:18.346 { 00:16:18.346 "subsystem": "iscsi", 00:16:18.346 "config": [ 00:16:18.346 { 00:16:18.346 "method": "iscsi_set_options", 00:16:18.346 "params": { 00:16:18.346 "node_base": "iqn.2016-06.io.spdk", 00:16:18.346 "max_sessions": 128, 00:16:18.346 "max_connections_per_session": 2, 00:16:18.346 "max_queue_depth": 64, 00:16:18.346 "default_time2wait": 2, 00:16:18.346 "default_time2retain": 20, 00:16:18.346 "first_burst_length": 8192, 00:16:18.346 "immediate_data": true, 00:16:18.346 "allow_duplicated_isid": false, 00:16:18.346 "error_recovery_level": 0, 00:16:18.346 "nop_timeout": 60, 00:16:18.346 "nop_in_interval": 30, 00:16:18.346 "disable_chap": false, 00:16:18.346 "require_chap": false, 00:16:18.346 "mutual_chap": false, 00:16:18.346 "chap_group": 0, 00:16:18.346 "max_large_datain_per_connection": 64, 00:16:18.346 "max_r2t_per_connection": 4, 00:16:18.346 "pdu_pool_size": 36864, 00:16:18.346 "immediate_data_pool_size": 16384, 00:16:18.346 "data_out_pool_size": 2048 00:16:18.346 } 00:16:18.346 } 00:16:18.346 ] 00:16:18.346 } 00:16:18.346 ] 00:16:18.346 }' 00:16:18.346 22:56:57 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:18.346 [2024-12-13 22:56:57.393620] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:18.346 [2024-12-13 22:56:57.393737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75329 ] 00:16:18.605 [2024-12-13 22:56:57.548889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.605 [2024-12-13 22:56:57.623992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.172 [2024-12-13 22:56:58.260771] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:19.172 [2024-12-13 22:56:58.261412] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:19.172 [2024-12-13 22:56:58.268852] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:19.172 [2024-12-13 22:56:58.268908] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:19.172 [2024-12-13 22:56:58.268915] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:19.172 [2024-12-13 22:56:58.268920] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:19.172 [2024-12-13 22:56:58.277823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:19.172 [2024-12-13 22:56:58.277840] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:19.172 [2024-12-13 22:56:58.284778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:19.172 [2024-12-13 22:56:58.284844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:19.172 [2024-12-13 22:56:58.301774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75329 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75329 ']' 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75329 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75329 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.431 killing process with pid 75329 00:16:19.431 
22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75329' 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75329 00:16:19.431 22:56:58 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75329 00:16:20.366 [2024-12-13 22:56:59.460796] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:20.366 [2024-12-13 22:56:59.498837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:20.366 [2024-12-13 22:56:59.498931] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:20.625 [2024-12-13 22:56:59.505781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:20.625 [2024-12-13 22:56:59.505818] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:20.625 [2024-12-13 22:56:59.505825] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:20.625 [2024-12-13 22:56:59.505844] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:20.625 [2024-12-13 22:56:59.505950] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:21.559 22:57:00 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:21.559 00:16:21.559 real 0m7.138s 00:16:21.559 user 0m4.758s 00:16:21.559 sys 0m2.983s 00:16:21.559 22:57:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.559 22:57:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.559 ************************************ 00:16:21.559 END TEST test_save_ublk_config 00:16:21.559 ************************************ 00:16:21.818 22:57:00 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75402 00:16:21.818 22:57:00 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:21.818 22:57:00 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75402 00:16:21.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.818 22:57:00 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:21.818 22:57:00 ublk -- common/autotest_common.sh@835 -- # '[' -z 75402 ']' 00:16:21.818 22:57:00 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.818 22:57:00 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.818 22:57:00 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.818 22:57:00 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.818 22:57:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:21.818 [2024-12-13 22:57:00.787828] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:21.818 [2024-12-13 22:57:00.787943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75402 ] 00:16:21.818 [2024-12-13 22:57:00.942474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:22.076 [2024-12-13 22:57:01.020050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.076 [2024-12-13 22:57:01.020142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.643 22:57:01 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.643 22:57:01 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:22.643 22:57:01 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:22.643 22:57:01 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:22.643 22:57:01 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.643 22:57:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:22.643 ************************************ 00:16:22.643 START TEST test_create_ublk 00:16:22.643 ************************************ 00:16:22.643 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:22.643 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:22.643 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.643 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:22.643 [2024-12-13 22:57:01.632778] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:22.643 [2024-12-13 22:57:01.634254] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:22.643 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.643 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:22.643 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:22.643 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.643 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:22.901 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:22.901 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.901 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:22.901 [2024-12-13 22:57:01.788877] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:22.901 [2024-12-13 22:57:01.789163] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:22.901 [2024-12-13 22:57:01.789173] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:22.901 [2024-12-13 22:57:01.789179] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:22.901 [2024-12-13 22:57:01.797926] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:22.901 [2024-12-13 22:57:01.797943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:22.901 
[2024-12-13 22:57:01.804780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:22.901 [2024-12-13 22:57:01.805258] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:22.901 [2024-12-13 22:57:01.818792] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:22.901 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:22.901 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.901 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:22.901 22:57:01 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:22.901 { 00:16:22.901 "ublk_device": "/dev/ublkb0", 00:16:22.901 "id": 0, 00:16:22.901 "queue_depth": 512, 00:16:22.901 "num_queues": 4, 00:16:22.901 "bdev_name": "Malloc0" 00:16:22.901 } 00:16:22.901 ]' 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:22.901 22:57:01 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:22.901 22:57:02 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:22.901 22:57:02 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:16:22.901 22:57:02 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:23.159 fio: verification read phase will never start because write phase uses all of runtime 00:16:23.159 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:23.159 fio-3.35 00:16:23.159 Starting 1 process 00:16:33.127 00:16:33.127 fio_test: (groupid=0, jobs=1): err= 0: pid=75448: Fri Dec 13 22:57:12 2024 00:16:33.127 write: IOPS=18.7k, BW=73.1MiB/s (76.7MB/s)(731MiB/10001msec); 0 zone resets 00:16:33.127 clat (usec): min=33, max=11915, avg=52.64, stdev=147.43 00:16:33.127 lat (usec): min=33, max=11922, avg=53.10, stdev=147.45 00:16:33.127 clat percentiles (usec): 00:16:33.127 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 42], 00:16:33.127 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:16:33.127 | 70.00th=[ 47], 80.00th=[ 49], 90.00th=[ 53], 95.00th=[ 59], 00:16:33.127 | 99.00th=[ 69], 99.50th=[ 90], 99.90th=[ 3359], 99.95th=[ 3556], 00:16:33.127 | 99.99th=[ 3884] 00:16:33.127 bw ( KiB/s): min=33524, max=84904, per=99.88%, avg=74764.84, stdev=18211.43, samples=19 00:16:33.127 iops : min= 8381, max=21226, avg=18691.21, stdev=4552.86, samples=19 00:16:33.127 lat (usec) : 50=85.68%, 100=13.86%, 250=0.17%, 500=0.02%, 750=0.02% 00:16:33.127 lat (usec) : 1000=0.02% 00:16:33.128 lat (msec) : 2=0.05%, 4=0.19%, 10=0.01%, 20=0.01% 00:16:33.128 cpu : usr=3.15%, sys=14.45%, ctx=187207, majf=0, minf=799 00:16:33.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.128 issued rwts: total=0,187160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.128 00:16:33.128 Run status group 0 (all jobs): 00:16:33.128 WRITE: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=731MiB (767MB), run=10001-10001msec 00:16:33.128 00:16:33.128 Disk stats (read/write): 00:16:33.128 ublkb0: ios=0/185232, merge=0/0, ticks=0/8188, in_queue=8188, util=98.75% 00:16:33.128 22:57:12 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:33.128 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.128 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.128 [2024-12-13 22:57:12.243612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:33.394 [2024-12-13 22:57:12.281226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:33.394 [2024-12-13 22:57:12.282190] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:33.394 [2024-12-13 22:57:12.292804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:33.394 [2024-12-13 22:57:12.293450] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:33.394 [2024-12-13 22:57:12.293464] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.394 22:57:12 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT 
rpc_cmd ublk_stop_disk 0 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.394 [2024-12-13 22:57:12.306831] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:33.394 request: 00:16:33.394 { 00:16:33.394 "ublk_id": 0, 00:16:33.394 "method": "ublk_stop_disk", 00:16:33.394 "req_id": 1 00:16:33.394 } 00:16:33.394 Got JSON-RPC error response 00:16:33.394 response: 00:16:33.394 { 00:16:33.394 "code": -19, 00:16:33.394 "message": "No such device" 00:16:33.394 } 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.394 22:57:12 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.394 [2024-12-13 22:57:12.322830] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:33.394 [2024-12-13 22:57:12.326522] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:33.394 [2024-12-13 22:57:12.326554] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.394 22:57:12 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.394 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.695 22:57:12 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:33.695 22:57:12 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:33.695 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.695 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.695 22:57:12 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:33.695 22:57:12 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:33.695 22:57:12 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:33.695 22:57:12 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:33.695 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.695 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.695 22:57:12 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:33.695 22:57:12 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:33.695 22:57:12 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:33.695 00:16:33.695 real 0m11.147s 00:16:33.695 user 0m0.612s 00:16:33.695 sys 0m1.527s 00:16:33.695 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.695 ************************************ 00:16:33.695 END TEST test_create_ublk 00:16:33.695 ************************************ 00:16:33.695 22:57:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 22:57:12 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:33.695 22:57:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:33.695 22:57:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.695 22:57:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.695 ************************************ 00:16:33.695 START TEST test_create_multi_ublk 00:16:33.695 ************************************ 00:16:33.695 22:57:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:33.695 22:57:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:33.695 22:57:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.695 22:57:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.954 [2024-12-13 22:57:12.820766] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:33.954 [2024-12-13 22:57:12.822272] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:33.954 22:57:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.954 22:57:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:33.954 22:57:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:33.954 22:57:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:33.954 22:57:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:33.954 22:57:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.954 22:57:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.954 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.954 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:33.954 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:33.954 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.954 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.954 [2024-12-13 22:57:13.023881] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:33.954 [2024-12-13 22:57:13.024185] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:33.954 [2024-12-13 22:57:13.024192] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:33.954 [2024-12-13 22:57:13.024200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:33.954 [2024-12-13 22:57:13.035819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:33.954 [2024-12-13 22:57:13.035839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:33.954 [2024-12-13 22:57:13.047773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:33.954 [2024-12-13 22:57:13.048260] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:33.954 [2024-12-13 22:57:13.084777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.213 [2024-12-13 22:57:13.308874] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:34.213 [2024-12-13 22:57:13.309162] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:34.213 [2024-12-13 22:57:13.309174] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:34.213 [2024-12-13 22:57:13.309179] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:34.213 [2024-12-13 22:57:13.316798] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:34.213 [2024-12-13 22:57:13.316815] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:34.213 [2024-12-13 22:57:13.324791] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:34.213 [2024-12-13 22:57:13.325278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:34.213 [2024-12-13 22:57:13.333807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:34.213 
22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.213 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.472 [2024-12-13 22:57:13.495860] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:34.472 [2024-12-13 22:57:13.496150] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:34.472 [2024-12-13 22:57:13.496157] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:34.472 [2024-12-13 22:57:13.496163] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:34.472 [2024-12-13 22:57:13.509775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:34.472 [2024-12-13 22:57:13.509795] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:34.472 [2024-12-13 22:57:13.517774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:34.472 [2024-12-13 22:57:13.518267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:34.472 [2024-12-13 22:57:13.524478] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.472 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.731 [2024-12-13 22:57:13.679875] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:34.731 [2024-12-13 22:57:13.680161] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:34.731 [2024-12-13 22:57:13.680171] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:34.731 [2024-12-13 22:57:13.680176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:34.731 
[2024-12-13 22:57:13.691791] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:34.731 [2024-12-13 22:57:13.691807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:34.731 [2024-12-13 22:57:13.699783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:34.731 [2024-12-13 22:57:13.700266] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:34.731 [2024-12-13 22:57:13.705476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:34.731 { 00:16:34.731 "ublk_device": "/dev/ublkb0", 00:16:34.731 "id": 0, 00:16:34.731 "queue_depth": 512, 00:16:34.731 "num_queues": 4, 00:16:34.731 "bdev_name": "Malloc0" 00:16:34.731 }, 00:16:34.731 { 00:16:34.731 "ublk_device": "/dev/ublkb1", 00:16:34.731 "id": 1, 00:16:34.731 "queue_depth": 512, 00:16:34.731 "num_queues": 4, 00:16:34.731 "bdev_name": "Malloc1" 00:16:34.731 }, 00:16:34.731 { 00:16:34.731 "ublk_device": "/dev/ublkb2", 00:16:34.731 "id": 2, 00:16:34.731 "queue_depth": 512, 00:16:34.731 "num_queues": 4, 00:16:34.731 "bdev_name": "Malloc2" 00:16:34.731 }, 00:16:34.731 { 00:16:34.731 "ublk_device": "/dev/ublkb3", 00:16:34.731 "id": 3, 00:16:34.731 "queue_depth": 512, 00:16:34.731 "num_queues": 4, 00:16:34.731 "bdev_name": "Malloc3" 00:16:34.731 } 00:16:34.731 ]' 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:34.731 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:34.990 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:34.990 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:34.990 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:34.990 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:34.990 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:34.990 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:34.990 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:34.990 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:34.990 22:57:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:34.990 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:34.990 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:34.990 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:34.990 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:34.990 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:34.990 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:34.990 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:34.990 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:34.990 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.248 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.248 [2024-12-13 22:57:14.343845] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:35.248 [2024-12-13 22:57:14.385130] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:35.248 [2024-12-13 22:57:14.386328] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:35.507 [2024-12-13 22:57:14.388984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:35.507 [2024-12-13 22:57:14.389233] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:35.507 [2024-12-13 22:57:14.389243] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.507 [2024-12-13 22:57:14.406848] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:35.507 [2024-12-13 22:57:14.454773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:35.507 [2024-12-13 22:57:14.455454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:35.507 [2024-12-13 22:57:14.459204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:35.507 [2024-12-13 22:57:14.459443] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:35.507 [2024-12-13 22:57:14.459456] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.507 [2024-12-13 22:57:14.467847] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:35.507 [2024-12-13 22:57:14.507812] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:35.507 [2024-12-13 22:57:14.508422] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:35.507 [2024-12-13 22:57:14.512460] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:35.507 [2024-12-13 22:57:14.512669] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:35.507 [2024-12-13 22:57:14.512681] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.507 [2024-12-13 22:57:14.522845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:35.507 [2024-12-13 22:57:14.563799] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:35.507 [2024-12-13 22:57:14.564380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:35.507 [2024-12-13 22:57:14.570774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:35.507 [2024-12-13 22:57:14.571002] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:35.507 [2024-12-13 22:57:14.571014] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.507 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:35.766 [2024-12-13 22:57:14.754819] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:35.766 [2024-12-13 22:57:14.758456] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:35.766 [2024-12-13 22:57:14.758485] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:35.766 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:35.766 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:35.766 22:57:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:35.766 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.766 22:57:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.025 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.025 22:57:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.025 22:57:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:36.025 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.025 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.592 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:36.851 22:57:15 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:36.851 00:16:36.851 real 0m3.143s 00:16:36.851 user 0m0.795s 00:16:36.851 sys 0m0.136s 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.851 22:57:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.851 ************************************ 00:16:36.851 END TEST test_create_multi_ublk 00:16:36.851 ************************************ 00:16:36.851 22:57:15 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:36.851 22:57:15 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:36.851 22:57:15 ublk -- ublk/ublk.sh@130 -- # killprocess 75402 00:16:36.851 22:57:15 ublk -- common/autotest_common.sh@954 -- # '[' -z 75402 ']' 00:16:36.851 22:57:15 ublk -- common/autotest_common.sh@958 -- # kill -0 75402 00:16:36.851 22:57:15 ublk -- common/autotest_common.sh@959 -- # uname 00:16:36.851 22:57:15 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.851 22:57:15 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75402 00:16:37.109 killing process with pid 75402 00:16:37.109 22:57:15 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.109 22:57:15 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.109 22:57:15 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75402' 00:16:37.109 22:57:15 ublk -- common/autotest_common.sh@973 -- # kill 75402 00:16:37.109 22:57:15 ublk -- common/autotest_common.sh@978 -- # wait 75402 00:16:37.676 [2024-12-13 22:57:16.533041] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:37.676 [2024-12-13 22:57:16.533233] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:38.248 00:16:38.248 real 0m23.828s 00:16:38.248 user 0m33.812s 00:16:38.248 sys 0m9.879s 00:16:38.248 22:57:17 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.248 ************************************ 00:16:38.248 22:57:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.248 END TEST ublk 00:16:38.248 ************************************ 00:16:38.248 22:57:17 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:38.248 
22:57:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:38.248 22:57:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.248 22:57:17 -- common/autotest_common.sh@10 -- # set +x 00:16:38.248 ************************************ 00:16:38.248 START TEST ublk_recovery 00:16:38.248 ************************************ 00:16:38.248 22:57:17 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:38.248 * Looking for test storage... 00:16:38.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:38.248 22:57:17 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:38.248 22:57:17 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:38.248 22:57:17 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:38.248 22:57:17 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.248 22:57:17 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:38.248 22:57:17 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.248 22:57:17 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.249 --rc genhtml_branch_coverage=1 00:16:38.249 --rc genhtml_function_coverage=1 00:16:38.249 --rc genhtml_legend=1 00:16:38.249 --rc geninfo_all_blocks=1 00:16:38.249 --rc geninfo_unexecuted_blocks=1 00:16:38.249 00:16:38.249 ' 00:16:38.249 22:57:17 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.249 --rc genhtml_branch_coverage=1 00:16:38.249 --rc genhtml_function_coverage=1 00:16:38.249 --rc genhtml_legend=1 00:16:38.249 --rc geninfo_all_blocks=1 00:16:38.249 --rc geninfo_unexecuted_blocks=1 00:16:38.249 00:16:38.249 ' 00:16:38.249 22:57:17 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.249 --rc genhtml_branch_coverage=1 00:16:38.249 --rc genhtml_function_coverage=1 00:16:38.249 --rc genhtml_legend=1 00:16:38.249 --rc geninfo_all_blocks=1 00:16:38.249 --rc geninfo_unexecuted_blocks=1 00:16:38.249 00:16:38.249 ' 00:16:38.249 22:57:17 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.249 --rc genhtml_branch_coverage=1 00:16:38.249 --rc genhtml_function_coverage=1 00:16:38.249 --rc genhtml_legend=1 00:16:38.249 --rc geninfo_all_blocks=1 00:16:38.249 --rc geninfo_unexecuted_blocks=1 00:16:38.249 00:16:38.249 ' 00:16:38.249 22:57:17 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:38.249 22:57:17 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:38.249 22:57:17 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:38.249 22:57:17 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:38.249 22:57:17 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:38.249 22:57:17 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:38.249 22:57:17 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:38.249 22:57:17 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:38.249 22:57:17 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:38.249 22:57:17 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:38.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.249 22:57:17 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75797 00:16:38.249 22:57:17 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:38.249 22:57:17 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75797 00:16:38.249 22:57:17 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75797 ']' 00:16:38.249 22:57:17 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.249 22:57:17 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.249 22:57:17 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.249 22:57:17 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.249 22:57:17 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:38.249 22:57:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.511 [2024-12-13 22:57:17.464269] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:38.511 [2024-12-13 22:57:17.464636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75797 ] 00:16:38.511 [2024-12-13 22:57:17.626033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:38.770 [2024-12-13 22:57:17.711602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.770 [2024-12-13 22:57:17.711627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:39.336 22:57:18 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.336 [2024-12-13 22:57:18.298776] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:39.336 [2024-12-13 22:57:18.300344] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.336 22:57:18 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.336 malloc0 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.336 22:57:18 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.336 [2024-12-13 22:57:18.378874] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:39.336 [2024-12-13 22:57:18.378948] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:39.336 [2024-12-13 22:57:18.378956] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:39.336 [2024-12-13 22:57:18.378962] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.336 [2024-12-13 22:57:18.386885] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.336 [2024-12-13 22:57:18.386900] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.336 [2024-12-13 22:57:18.394776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.336 [2024-12-13 22:57:18.394881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:39.336 [2024-12-13 22:57:18.416785] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.336 1 00:16:39.336 22:57:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.336 22:57:18 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:40.711 22:57:19 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75832 00:16:40.711 22:57:19 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:40.711 22:57:19 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:40.711 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:40.711 fio-3.35 00:16:40.711 Starting 1 process 00:16:45.977 22:57:24 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75797 00:16:45.977 22:57:24 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:51.267 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75797 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:51.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.267 22:57:29 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75937 00:16:51.267 22:57:29 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.267 22:57:29 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75937 00:16:51.267 22:57:29 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75937 ']' 00:16:51.267 22:57:29 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.267 22:57:29 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.267 22:57:29 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.267 22:57:29 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.267 22:57:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.267 22:57:29 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:51.267 [2024-12-13 22:57:29.520190] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:51.267 [2024-12-13 22:57:29.520574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75937 ] 00:16:51.267 [2024-12-13 22:57:29.676837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:51.267 [2024-12-13 22:57:29.788412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.267 [2024-12-13 22:57:29.788492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.267 22:57:30 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.267 22:57:30 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:51.267 22:57:30 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:51.267 22:57:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.267 22:57:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.267 [2024-12-13 22:57:30.385778] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:51.267 [2024-12-13 22:57:30.387589] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:51.267 22:57:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.267 22:57:30 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:51.267 22:57:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.267 22:57:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.526 malloc0 00:16:51.526 22:57:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.526 22:57:30 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:51.526 22:57:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.526 22:57:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.526 [2024-12-13 22:57:30.489890] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:51.526 [2024-12-13 22:57:30.489927] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:51.526 [2024-12-13 22:57:30.489937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:51.526 [2024-12-13 22:57:30.497808] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:51.526 [2024-12-13 22:57:30.497831] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:51.526 [2024-12-13 22:57:30.497839] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:51.526 [2024-12-13 22:57:30.497920] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:51.526 1 00:16:51.526 22:57:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.526 22:57:30 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75832 00:16:51.526 [2024-12-13 22:57:30.505781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:51.526 [2024-12-13 22:57:30.512156] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:51.526 [2024-12-13 22:57:30.519960] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:51.526 [2024-12-13 
22:57:30.519981] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:47.745 00:17:47.745 fio_test: (groupid=0, jobs=1): err= 0: pid=75836: Fri Dec 13 22:58:19 2024 00:17:47.745 read: IOPS=28.1k, BW=110MiB/s (115MB/s)(6582MiB/60001msec) 00:17:47.745 slat (nsec): min=1091, max=320943, avg=4742.25, stdev=1388.57 00:17:47.745 clat (usec): min=918, max=6096.8k, avg=2248.55, stdev=38431.52 00:17:47.745 lat (usec): min=958, max=6096.8k, avg=2253.29, stdev=38431.52 00:17:47.745 clat percentiles (usec): 00:17:47.745 | 1.00th=[ 1663], 5.00th=[ 1762], 10.00th=[ 1795], 20.00th=[ 1811], 00:17:47.745 | 30.00th=[ 1827], 40.00th=[ 1844], 50.00th=[ 1860], 60.00th=[ 1876], 00:17:47.745 | 70.00th=[ 1893], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2933], 00:17:47.745 | 99.00th=[ 5014], 99.50th=[ 5669], 99.90th=[ 7046], 99.95th=[ 8225], 00:17:47.745 | 99.99th=[12780] 00:17:47.745 bw ( KiB/s): min=11128, max=131800, per=100.00%, avg=123768.52, stdev=17565.58, samples=108 00:17:47.745 iops : min= 2782, max=32950, avg=30942.13, stdev=4391.40, samples=108 00:17:47.745 write: IOPS=28.1k, BW=110MiB/s (115MB/s)(6577MiB/60001msec); 0 zone resets 00:17:47.745 slat (nsec): min=1120, max=819559, avg=4771.26, stdev=1577.76 00:17:47.745 clat (usec): min=1085, max=6097.0k, avg=2300.71, stdev=36684.81 00:17:47.745 lat (usec): min=1091, max=6097.0k, avg=2305.48, stdev=36684.87 00:17:47.745 clat percentiles (usec): 00:17:47.745 | 1.00th=[ 1696], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1893], 00:17:47.745 | 30.00th=[ 1926], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1958], 00:17:47.745 | 70.00th=[ 1991], 80.00th=[ 2024], 90.00th=[ 2147], 95.00th=[ 2835], 00:17:47.745 | 99.00th=[ 5014], 99.50th=[ 5735], 99.90th=[ 7046], 99.95th=[ 8225], 00:17:47.745 | 99.99th=[12911] 00:17:47.745 bw ( KiB/s): min=10432, max=131048, per=100.00%, avg=123669.41, stdev=17653.39, samples=108 00:17:47.745 iops : min= 2608, max=32762, avg=30917.35, stdev=4413.35, samples=108 00:17:47.745 lat (usec) : 1000=0.01% 00:17:47.745 lat (msec) : 2=81.26%, 4=16.07%, 10=2.65%, 20=0.02%, >=2000=0.01% 00:17:47.745 cpu : usr=6.01%, sys=27.19%, ctx=112592, majf=0, minf=13 00:17:47.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:47.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:47.745 issued rwts: total=1685004,1683677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:47.745 00:17:47.745 Run status group 0 (all jobs): 00:17:47.745 READ: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=6582MiB (6902MB), run=60001-60001msec 00:17:47.745 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=6577MiB (6896MB), run=60001-60001msec 00:17:47.745 00:17:47.745 Disk stats (read/write): 00:17:47.745 ublkb1: ios=1681775/1680371, merge=0/0, ticks=3703413/3654768, in_queue=7358182, util=99.89% 00:17:47.745 22:58:19 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 [2024-12-13 22:58:19.672580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:47.745 [2024-12-13 22:58:19.709883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
00:17:47.745 [2024-12-13 22:58:19.710023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:47.745 [2024-12-13 22:58:19.717794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:47.745 [2024-12-13 22:58:19.717947] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:47.745 [2024-12-13 22:58:19.717974] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.745 22:58:19 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 [2024-12-13 22:58:19.733863] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:47.745 [2024-12-13 22:58:19.741778] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:47.745 [2024-12-13 22:58:19.741806] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.745 22:58:19 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:47.745 22:58:19 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:47.745 22:58:19 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75937 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75937 ']' 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75937 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75937 00:17:47.745 killing process with pid 75937 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75937' 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75937 00:17:47.745 22:58:19 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75937 00:17:47.745 [2024-12-13 22:58:20.797889] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:47.745 [2024-12-13 22:58:20.798092] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:47.745 ************************************ 00:17:47.745 END TEST ublk_recovery 00:17:47.745 ************************************ 00:17:47.745 00:17:47.745 real 1m4.263s 00:17:47.745 user 1m46.347s 00:17:47.745 sys 0m31.322s 00:17:47.745 22:58:21 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.745 22:58:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 22:58:21 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:47.745 22:58:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:47.745 22:58:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:47.745 22:58:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.745 22:58:21 -- common/autotest_common.sh@10 -- # set +x 00:17:47.745 22:58:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:47.745 22:58:21 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:47.745 22:58:21 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 
00:17:47.745 22:58:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:47.745 22:58:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:47.745 22:58:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:47.745 22:58:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:47.745 22:58:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:47.745 22:58:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:47.745 22:58:21 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:47.746 22:58:21 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.746 22:58:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:47.746 22:58:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.746 22:58:21 -- common/autotest_common.sh@10 -- # set +x 00:17:47.746 ************************************ 00:17:47.746 START TEST ftl 00:17:47.746 ************************************ 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.746 * Looking for test storage... 00:17:47.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:47.746 22:58:21 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.746 22:58:21 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.746 22:58:21 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.746 22:58:21 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.746 22:58:21 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.746 22:58:21 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.746 22:58:21 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.746 22:58:21 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.746 22:58:21 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.746 22:58:21 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.746 22:58:21 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.746 22:58:21 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:47.746 22:58:21 ftl -- scripts/common.sh@345 -- # : 1 00:17:47.746 22:58:21 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.746 22:58:21 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.746 22:58:21 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:47.746 22:58:21 ftl -- scripts/common.sh@353 -- # local d=1 00:17:47.746 22:58:21 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.746 22:58:21 ftl -- scripts/common.sh@355 -- # echo 1 00:17:47.746 22:58:21 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.746 22:58:21 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:47.746 22:58:21 ftl -- scripts/common.sh@353 -- # local d=2 00:17:47.746 22:58:21 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.746 22:58:21 ftl -- scripts/common.sh@355 -- # echo 2 00:17:47.746 22:58:21 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.746 22:58:21 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.746 22:58:21 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.746 22:58:21 ftl -- scripts/common.sh@368 -- # return 0 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:47.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.746 --rc genhtml_branch_coverage=1 00:17:47.746 --rc genhtml_function_coverage=1 00:17:47.746 --rc genhtml_legend=1 00:17:47.746 --rc geninfo_all_blocks=1 00:17:47.746 --rc geninfo_unexecuted_blocks=1 00:17:47.746 00:17:47.746 ' 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:47.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.746 --rc genhtml_branch_coverage=1 00:17:47.746 --rc genhtml_function_coverage=1 00:17:47.746 --rc genhtml_legend=1 00:17:47.746 --rc geninfo_all_blocks=1 00:17:47.746 --rc geninfo_unexecuted_blocks=1 00:17:47.746 00:17:47.746 ' 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:47.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.746 --rc genhtml_branch_coverage=1 00:17:47.746 --rc genhtml_function_coverage=1 00:17:47.746 --rc genhtml_legend=1 00:17:47.746 --rc geninfo_all_blocks=1 00:17:47.746 --rc geninfo_unexecuted_blocks=1 00:17:47.746 00:17:47.746 ' 00:17:47.746 22:58:21 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:47.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.746 --rc genhtml_branch_coverage=1 00:17:47.746 --rc genhtml_function_coverage=1 00:17:47.746 --rc genhtml_legend=1 00:17:47.746 --rc geninfo_all_blocks=1 00:17:47.746 --rc geninfo_unexecuted_blocks=1 00:17:47.746 00:17:47.746 ' 00:17:47.746 22:58:21 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:47.746 22:58:21 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.746 22:58:21 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.746 22:58:21 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.746 22:58:21 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:47.746 22:58:21 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:47.746 22:58:21 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.746 22:58:21 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:47.746 22:58:21 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:47.746 22:58:21 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.746 22:58:21 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.746 22:58:21 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:47.746 22:58:21 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:47.746 22:58:21 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.746 22:58:21 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.746 22:58:21 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:47.746 22:58:21 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:47.746 22:58:21 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.746 22:58:21 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.746 22:58:21 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:47.746 22:58:21 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:47.746 22:58:21 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.746 22:58:21 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.746 22:58:21 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.746 22:58:21 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.746 22:58:21 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:47.746 22:58:21 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:47.746 22:58:21 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.746 22:58:21 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.746 22:58:21 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.746 22:58:21 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:47.746 22:58:21 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:47.746 22:58:21 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:47.746 22:58:21 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:47.746 22:58:21 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:47.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:47.746 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.746 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.746 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.746 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.746 22:58:22 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76738 00:17:47.746 22:58:22 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76738 00:17:47.746 22:58:22 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:47.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:47.746 22:58:22 ftl -- common/autotest_common.sh@835 -- # '[' -z 76738 ']' 00:17:47.746 22:58:22 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.746 22:58:22 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.746 22:58:22 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.746 22:58:22 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.746 22:58:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:47.746 [2024-12-13 22:58:22.324924] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:47.746 [2024-12-13 22:58:22.325312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76738 ] 00:17:47.746 [2024-12-13 22:58:22.489501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.746 [2024-12-13 22:58:22.609927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.746 22:58:23 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.746 22:58:23 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:47.746 22:58:23 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:47.746 22:58:23 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@50 -- # break 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:47.746 22:58:24 ftl -- ftl/ftl.sh@63 -- # break 00:17:47.747 22:58:24 ftl -- ftl/ftl.sh@66 -- # killprocess 76738 00:17:47.747 22:58:24 ftl -- common/autotest_common.sh@954 -- # '[' -z 76738 ']' 00:17:47.747 22:58:24 ftl -- common/autotest_common.sh@958 -- # kill -0 76738 00:17:47.747 22:58:24 ftl -- common/autotest_common.sh@959 -- # uname 00:17:47.747 22:58:24 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.747 22:58:24 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76738 00:17:47.747 killing process with pid 76738 00:17:47.747 22:58:24 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.747 22:58:24 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.747 22:58:24 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76738' 00:17:47.747 22:58:24 ftl -- common/autotest_common.sh@973 -- # kill 76738 00:17:47.747 22:58:24 ftl -- common/autotest_common.sh@978 -- # wait 76738 00:17:47.747 22:58:26 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:47.747 22:58:26 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:47.747 22:58:26 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:47.747 22:58:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.747 22:58:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:47.747 ************************************ 00:17:47.747 START TEST ftl_fio_basic 00:17:47.747 ************************************ 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:47.747 * Looking for test storage... 00:17:47.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:47.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.747 --rc genhtml_branch_coverage=1 00:17:47.747 --rc genhtml_function_coverage=1 00:17:47.747 --rc genhtml_legend=1 00:17:47.747 --rc geninfo_all_blocks=1 00:17:47.747 --rc geninfo_unexecuted_blocks=1 00:17:47.747 00:17:47.747 ' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:47.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.747 --rc genhtml_branch_coverage=1 00:17:47.747 --rc genhtml_function_coverage=1 00:17:47.747 --rc genhtml_legend=1 00:17:47.747 --rc geninfo_all_blocks=1 00:17:47.747 --rc geninfo_unexecuted_blocks=1 00:17:47.747 00:17:47.747 ' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:47.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.747 --rc genhtml_branch_coverage=1 00:17:47.747 --rc genhtml_function_coverage=1 00:17:47.747 --rc genhtml_legend=1 00:17:47.747 --rc geninfo_all_blocks=1 00:17:47.747 --rc geninfo_unexecuted_blocks=1 00:17:47.747 00:17:47.747 ' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:47.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.747 --rc genhtml_branch_coverage=1 00:17:47.747 --rc genhtml_function_coverage=1 00:17:47.747 --rc genhtml_legend=1 00:17:47.747 --rc geninfo_all_blocks=1 00:17:47.747 --rc geninfo_unexecuted_blocks=1 00:17:47.747 00:17:47.747 ' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:47.747 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76870 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76870 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76870 ']' 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.748 22:58:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:47.748 [2024-12-13 22:58:26.388955] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:47.748 [2024-12-13 22:58:26.389243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76870 ] 00:17:47.748 [2024-12-13 22:58:26.545984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:47.748 [2024-12-13 22:58:26.625213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.748 [2024-12-13 22:58:26.625518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.748 [2024-12-13 22:58:26.625546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.315 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.315 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:48.315 22:58:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:48.315 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:48.315 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:48.315 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:48.315 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:48.315 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:48.573 { 00:17:48.573 "name": "nvme0n1", 00:17:48.573 "aliases": [ 00:17:48.573 "287d4c85-0028-4633-b4ee-363211bf29cf" 00:17:48.573 ], 00:17:48.573 "product_name": "NVMe disk", 00:17:48.573 "block_size": 4096, 00:17:48.573 "num_blocks": 1310720, 00:17:48.573 "uuid": "287d4c85-0028-4633-b4ee-363211bf29cf", 00:17:48.573 "numa_id": -1, 00:17:48.573 "assigned_rate_limits": { 00:17:48.573 "rw_ios_per_sec": 0, 00:17:48.573 "rw_mbytes_per_sec": 0, 00:17:48.573 "r_mbytes_per_sec": 0, 00:17:48.573 "w_mbytes_per_sec": 0 00:17:48.573 }, 00:17:48.573 "claimed": false, 00:17:48.573 "zoned": false, 00:17:48.573 "supported_io_types": { 00:17:48.573 "read": true, 00:17:48.573 "write": true, 00:17:48.573 "unmap": true, 00:17:48.573 "flush": true, 00:17:48.573 "reset": true, 00:17:48.573 "nvme_admin": true, 00:17:48.573 "nvme_io": true, 00:17:48.573 "nvme_io_md": false, 00:17:48.573 "write_zeroes": true, 00:17:48.573 "zcopy": false, 00:17:48.573 "get_zone_info": false, 00:17:48.573 "zone_management": false, 00:17:48.573 "zone_append": false, 00:17:48.573 "compare": true, 00:17:48.573 "compare_and_write": false, 00:17:48.573 "abort": true, 00:17:48.573 
"seek_hole": false, 00:17:48.573 "seek_data": false, 00:17:48.573 "copy": true, 00:17:48.573 "nvme_iov_md": false 00:17:48.573 }, 00:17:48.573 "driver_specific": { 00:17:48.573 "nvme": [ 00:17:48.573 { 00:17:48.573 "pci_address": "0000:00:11.0", 00:17:48.573 "trid": { 00:17:48.573 "trtype": "PCIe", 00:17:48.573 "traddr": "0000:00:11.0" 00:17:48.573 }, 00:17:48.573 "ctrlr_data": { 00:17:48.573 "cntlid": 0, 00:17:48.573 "vendor_id": "0x1b36", 00:17:48.573 "model_number": "QEMU NVMe Ctrl", 00:17:48.573 "serial_number": "12341", 00:17:48.573 "firmware_revision": "8.0.0", 00:17:48.573 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:48.573 "oacs": { 00:17:48.573 "security": 0, 00:17:48.573 "format": 1, 00:17:48.573 "firmware": 0, 00:17:48.573 "ns_manage": 1 00:17:48.573 }, 00:17:48.573 "multi_ctrlr": false, 00:17:48.573 "ana_reporting": false 00:17:48.573 }, 00:17:48.573 "vs": { 00:17:48.573 "nvme_version": "1.4" 00:17:48.573 }, 00:17:48.573 "ns_data": { 00:17:48.573 "id": 1, 00:17:48.573 "can_share": false 00:17:48.573 } 00:17:48.573 } 00:17:48.573 ], 00:17:48.573 "mp_policy": "active_passive" 00:17:48.573 } 00:17:48.573 } 00:17:48.573 ]' 00:17:48.573 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:48.832 22:58:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:49.090 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=44b66f9a-fb67-4bc4-b93c-69094ea4c564 00:17:49.090 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 44b66f9a-fb67-4bc4-b93c-69094ea4c564 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6a3cf9bd-9be2-47db-841b-46c6b282e7c1 
00:17:49.348 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:49.348 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:49.605 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:49.605 { 00:17:49.605 "name": "6a3cf9bd-9be2-47db-841b-46c6b282e7c1", 00:17:49.605 "aliases": [ 00:17:49.605 "lvs/nvme0n1p0" 00:17:49.605 ], 00:17:49.605 "product_name": "Logical Volume", 00:17:49.605 "block_size": 4096, 00:17:49.605 "num_blocks": 26476544, 00:17:49.605 "uuid": "6a3cf9bd-9be2-47db-841b-46c6b282e7c1", 00:17:49.605 "assigned_rate_limits": { 00:17:49.605 "rw_ios_per_sec": 0, 00:17:49.605 "rw_mbytes_per_sec": 0, 00:17:49.605 "r_mbytes_per_sec": 0, 00:17:49.605 "w_mbytes_per_sec": 0 00:17:49.605 }, 00:17:49.605 "claimed": false, 00:17:49.605 "zoned": false, 00:17:49.605 "supported_io_types": { 00:17:49.605 "read": true, 00:17:49.605 "write": true, 00:17:49.605 "unmap": true, 00:17:49.605 "flush": false, 00:17:49.605 "reset": true, 00:17:49.605 "nvme_admin": false, 00:17:49.605 "nvme_io": false, 00:17:49.605 "nvme_io_md": false, 00:17:49.605 "write_zeroes": true, 00:17:49.605 "zcopy": false, 00:17:49.605 "get_zone_info": false, 00:17:49.605 "zone_management": false, 00:17:49.605 "zone_append": false, 00:17:49.605 "compare": false, 00:17:49.605 "compare_and_write": false, 00:17:49.605 "abort": false, 00:17:49.605 "seek_hole": true, 00:17:49.605 "seek_data": true, 00:17:49.605 "copy": false, 00:17:49.605 "nvme_iov_md": false 00:17:49.605 }, 00:17:49.605 "driver_specific": { 00:17:49.605 "lvol": { 00:17:49.605 "lvol_store_uuid": "44b66f9a-fb67-4bc4-b93c-69094ea4c564", 00:17:49.605 "base_bdev": "nvme0n1", 00:17:49.605 "thin_provision": true, 00:17:49.605 "num_allocated_clusters": 0, 00:17:49.605 "snapshot": false, 00:17:49.605 "clone": false, 00:17:49.605 "esnap_clone": false 00:17:49.605 } 00:17:49.605 } 00:17:49.605 } 00:17:49.605 ]' 00:17:49.605 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:49.606 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:49.606 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:49.606 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:49.606 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:49.606 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:49.606 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:49.606 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:49.606 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:49.862 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:49.862 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:49.862 22:58:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:49.862 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:49.863 22:58:28 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:49.863 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:49.863 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:49.863 22:58:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:50.120 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:50.120 { 00:17:50.120 "name": "6a3cf9bd-9be2-47db-841b-46c6b282e7c1", 00:17:50.120 "aliases": [ 00:17:50.121 "lvs/nvme0n1p0" 00:17:50.121 ], 00:17:50.121 "product_name": "Logical Volume", 00:17:50.121 "block_size": 4096, 00:17:50.121 "num_blocks": 26476544, 00:17:50.121 "uuid": "6a3cf9bd-9be2-47db-841b-46c6b282e7c1", 00:17:50.121 "assigned_rate_limits": { 00:17:50.121 "rw_ios_per_sec": 0, 00:17:50.121 "rw_mbytes_per_sec": 0, 00:17:50.121 "r_mbytes_per_sec": 0, 00:17:50.121 "w_mbytes_per_sec": 0 00:17:50.121 }, 00:17:50.121 "claimed": false, 00:17:50.121 "zoned": false, 00:17:50.121 "supported_io_types": { 00:17:50.121 "read": true, 00:17:50.121 "write": true, 00:17:50.121 "unmap": true, 00:17:50.121 "flush": false, 00:17:50.121 "reset": true, 00:17:50.121 "nvme_admin": false, 00:17:50.121 "nvme_io": false, 00:17:50.121 "nvme_io_md": false, 00:17:50.121 "write_zeroes": true, 00:17:50.121 "zcopy": false, 00:17:50.121 "get_zone_info": false, 00:17:50.121 "zone_management": false, 00:17:50.121 "zone_append": false, 00:17:50.121 "compare": false, 00:17:50.121 "compare_and_write": false, 00:17:50.121 "abort": false, 00:17:50.121 "seek_hole": true, 00:17:50.121 "seek_data": true, 00:17:50.121 "copy": false, 00:17:50.121 "nvme_iov_md": false 00:17:50.121 }, 00:17:50.121 "driver_specific": { 00:17:50.121 "lvol": { 00:17:50.121 "lvol_store_uuid": "44b66f9a-fb67-4bc4-b93c-69094ea4c564", 00:17:50.121 "base_bdev": "nvme0n1", 00:17:50.121 "thin_provision": true, 00:17:50.121 "num_allocated_clusters": 0, 00:17:50.121 "snapshot": false, 00:17:50.121 "clone": false, 00:17:50.121 "esnap_clone": false 00:17:50.121 } 00:17:50.121 } 00:17:50.121 } 00:17:50.121 ]' 00:17:50.121 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:50.121 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:50.121 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:50.121 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:50.121 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:50.121 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:50.121 22:58:29 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:50.121 22:58:29 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:50.379 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a3cf9bd-9be2-47db-841b-46c6b282e7c1 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:50.379 { 00:17:50.379 "name": "6a3cf9bd-9be2-47db-841b-46c6b282e7c1", 00:17:50.379 "aliases": [ 00:17:50.379 "lvs/nvme0n1p0" 00:17:50.379 ], 00:17:50.379 "product_name": "Logical Volume", 00:17:50.379 "block_size": 4096, 00:17:50.379 "num_blocks": 26476544, 00:17:50.379 "uuid": "6a3cf9bd-9be2-47db-841b-46c6b282e7c1", 00:17:50.379 "assigned_rate_limits": { 00:17:50.379 "rw_ios_per_sec": 0, 00:17:50.379 "rw_mbytes_per_sec": 0, 00:17:50.379 "r_mbytes_per_sec": 0, 00:17:50.379 "w_mbytes_per_sec": 0 00:17:50.379 }, 00:17:50.379 "claimed": false, 00:17:50.379 "zoned": false, 00:17:50.379 "supported_io_types": { 00:17:50.379 "read": true, 00:17:50.379 "write": true, 00:17:50.379 "unmap": true, 00:17:50.379 "flush": false, 00:17:50.379 "reset": true, 00:17:50.379 "nvme_admin": false, 00:17:50.379 "nvme_io": false, 00:17:50.379 "nvme_io_md": false, 00:17:50.379 "write_zeroes": true, 00:17:50.379 "zcopy": false, 00:17:50.379 "get_zone_info": false, 00:17:50.379 "zone_management": false, 00:17:50.379 "zone_append": false, 00:17:50.379 "compare": false, 00:17:50.379 "compare_and_write": false, 00:17:50.379 "abort": false, 00:17:50.379 "seek_hole": true, 00:17:50.379 "seek_data": true, 00:17:50.379 "copy": false, 00:17:50.379 "nvme_iov_md": false 00:17:50.379 }, 00:17:50.379 "driver_specific": { 00:17:50.379 "lvol": { 00:17:50.379 "lvol_store_uuid": "44b66f9a-fb67-4bc4-b93c-69094ea4c564", 00:17:50.379 "base_bdev": "nvme0n1", 00:17:50.379 "thin_provision": true, 00:17:50.379 "num_allocated_clusters": 0, 00:17:50.379 "snapshot": false, 00:17:50.379 "clone": false, 00:17:50.379 "esnap_clone": false 00:17:50.379 } 00:17:50.379 } 00:17:50.379 } 00:17:50.379 ]' 00:17:50.379 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:50.638 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:50.638 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:50.638 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:50.638 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:50.638 22:58:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:50.638 22:58:29 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:50.638 22:58:29 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:50.638 22:58:29 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6a3cf9bd-9be2-47db-841b-46c6b282e7c1 -c nvc0n1p0 --l2p_dram_limit 60 00:17:50.638 [2024-12-13 22:58:29.761115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.638 [2024-12-13 22:58:29.761158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.638 [2024-12-13 22:58:29.761171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:50.638 
[2024-12-13 22:58:29.761178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.638 [2024-12-13 22:58:29.761224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 22:58:29.761233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.639 [2024-12-13 22:58:29.761241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:50.639 [2024-12-13 22:58:29.761247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.761275] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.639 [2024-12-13 22:58:29.761853] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.639 [2024-12-13 22:58:29.761870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 22:58:29.761876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.639 [2024-12-13 22:58:29.761884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:17:50.639 [2024-12-13 22:58:29.761890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.761925] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 36df95e5-07d6-432d-bbe7-25bdcbb6495c 00:17:50.639 [2024-12-13 22:58:29.762894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 22:58:29.763017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:50.639 [2024-12-13 22:58:29.763029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:50.639 [2024-12-13 22:58:29.763037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.767707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 22:58:29.767741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.639 [2024-12-13 22:58:29.767748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:17:50.639 [2024-12-13 22:58:29.767763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.767843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 22:58:29.767852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.639 [2024-12-13 22:58:29.767859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:50.639 [2024-12-13 22:58:29.767868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.767918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 22:58:29.767927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.639 [2024-12-13 22:58:29.767934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:50.639 [2024-12-13 22:58:29.767940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.767962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.639 [2024-12-13 22:58:29.770742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 
22:58:29.770849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.639 [2024-12-13 22:58:29.770865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:17:50.639 [2024-12-13 22:58:29.770873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.770910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 22:58:29.770916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.639 [2024-12-13 22:58:29.770923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:50.639 [2024-12-13 22:58:29.770929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.770947] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:50.639 [2024-12-13 22:58:29.771065] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:50.639 [2024-12-13 22:58:29.771077] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.639 [2024-12-13 22:58:29.771085] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:50.639 [2024-12-13 22:58:29.771093] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771100] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771108] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:50.639 [2024-12-13 22:58:29.771114] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.639 [2024-12-13 22:58:29.771121] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:50.639 [2024-12-13 22:58:29.771127] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:50.639 [2024-12-13 22:58:29.771134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 22:58:29.771141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.639 [2024-12-13 22:58:29.771149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:17:50.639 [2024-12-13 22:58:29.771154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.771226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.639 [2024-12-13 22:58:29.771232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.639 [2024-12-13 22:58:29.771240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:50.639 [2024-12-13 22:58:29.771246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.639 [2024-12-13 22:58:29.771339] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.639 [2024-12-13 22:58:29.771346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.639 [2024-12-13 22:58:29.771355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771367] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:50.639 [2024-12-13 22:58:29.771372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.639 [2024-12-13 22:58:29.771392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.639 [2024-12-13 22:58:29.771404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.639 [2024-12-13 22:58:29.771409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:50.639 [2024-12-13 22:58:29.771418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.639 [2024-12-13 22:58:29.771423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.639 [2024-12-13 22:58:29.771430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:50.639 [2024-12-13 22:58:29.771435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.639 [2024-12-13 22:58:29.771448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.639 [2024-12-13 22:58:29.771466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.639 [2024-12-13 22:58:29.771481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.639 [2024-12-13 22:58:29.771498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.639 [2024-12-13 22:58:29.771514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.639 [2024-12-13 22:58:29.771533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.639 [2024-12-13 22:58:29.771554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.639 [2024-12-13 22:58:29.771560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:50.639 [2024-12-13 22:58:29.771566] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.639 [2024-12-13 22:58:29.771571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:50.639 [2024-12-13 22:58:29.771577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:50.639 [2024-12-13 22:58:29.771582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:50.639 [2024-12-13 22:58:29.771593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:50.639 [2024-12-13 22:58:29.771600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771604] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.639 [2024-12-13 22:58:29.771613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.639 [2024-12-13 22:58:29.771618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.639 [2024-12-13 22:58:29.771632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:50.639 [2024-12-13 22:58:29.771640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.639 [2024-12-13 22:58:29.771645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.639 [2024-12-13 22:58:29.771652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:50.639 [2024-12-13 22:58:29.771657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.639 [2024-12-13 22:58:29.771663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.639 [2024-12-13 22:58:29.771669] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.640 [2024-12-13 22:58:29.771678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.640 [2024-12-13 22:58:29.771684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:50.640 [2024-12-13 22:58:29.771691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:50.640 [2024-12-13 22:58:29.771697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:50.640 [2024-12-13 22:58:29.771703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:50.640 [2024-12-13 22:58:29.771709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:50.640 [2024-12-13 22:58:29.771716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:50.640 [2024-12-13 22:58:29.771721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:50.640 [2024-12-13 22:58:29.771728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:50.640 [2024-12-13 22:58:29.771751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:50.640 [2024-12-13 22:58:29.771768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:50.640 [2024-12-13 22:58:29.771774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:50.640 [2024-12-13 22:58:29.771781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:50.640 [2024-12-13 22:58:29.771786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:50.640 [2024-12-13 22:58:29.771793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:50.640 [2024-12-13 22:58:29.771798] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.640 [2024-12-13 22:58:29.771806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.640 [2024-12-13 22:58:29.771814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.640 [2024-12-13 22:58:29.771821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.640 [2024-12-13 22:58:29.771826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.640 [2024-12-13 22:58:29.771834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.640 [2024-12-13 22:58:29.771840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.640 [2024-12-13 22:58:29.771848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.640 [2024-12-13 22:58:29.771854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:17:50.640 [2024-12-13 22:58:29.771861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.640 [2024-12-13 22:58:29.771919] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
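Condensed, the RPC sequence that produces the FTL startup trace above looks roughly like the sketch below. The commands and values mirror what the log shows (controller nvc0 on 0000:00:10.0, a single 5171 MiB split used as the write-buffer cache, the thin-provisioned lvol as the base device, and a 60 MiB L2P DRAM limit); the RPC variable is just shorthand, not part of the test scripts.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # NV cache: attach the second controller and carve one 5171 MiB partition (-> nvc0n1p0)
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # FTL device: thin-provisioned lvol as base, split partition as NV cache,
    # L2P capped at 60 MiB of DRAM; startup scrubs the cache, hence the long RPC timeout
    $RPC -t 240 bdev_ftl_create -b ftl0 \
        -d 6a3cf9bd-9be2-47db-841b-46c6b282e7c1 \
        -c nvc0n1p0 --l2p_dram_limit 60
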
00:17:50.640 [2024-12-13 22:58:29.771930] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:53.924 [2024-12-13 22:58:32.405577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.405642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:53.924 [2024-12-13 22:58:32.405657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2633.646 ms 00:17:53.924 [2024-12-13 22:58:32.405667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.430838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.430881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:53.924 [2024-12-13 22:58:32.430893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.940 ms 00:17:53.924 [2024-12-13 22:58:32.430903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.431026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.431038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:53.924 [2024-12-13 22:58:32.431047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:53.924 [2024-12-13 22:58:32.431058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.482973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.483016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:53.924 [2024-12-13 22:58:32.483032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.874 ms 00:17:53.924 [2024-12-13 22:58:32.483042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.483088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.483099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:53.924 [2024-12-13 22:58:32.483108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:53.924 [2024-12-13 22:58:32.483116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.483482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.483500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:53.924 [2024-12-13 22:58:32.483508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:53.924 [2024-12-13 22:58:32.483519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.483642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.483653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:53.924 [2024-12-13 22:58:32.483661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:53.924 [2024-12-13 22:58:32.483671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.498046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.498076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:53.924 [2024-12-13 
22:58:32.498086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.349 ms 00:17:53.924 [2024-12-13 22:58:32.498095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.509410] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:53.924 [2024-12-13 22:58:32.523620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.523650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:53.924 [2024-12-13 22:58:32.523662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.439 ms 00:17:53.924 [2024-12-13 22:58:32.523672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.570042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.570077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:53.924 [2024-12-13 22:58:32.570094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.330 ms 00:17:53.924 [2024-12-13 22:58:32.570103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.570284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.570295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:53.924 [2024-12-13 22:58:32.570307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:17:53.924 [2024-12-13 22:58:32.570314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.592990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.593024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:53.924 [2024-12-13 22:58:32.593036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.624 ms 00:17:53.924 [2024-12-13 22:58:32.593044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.615110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.615137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:53.924 [2024-12-13 22:58:32.615149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.022 ms 00:17:53.924 [2024-12-13 22:58:32.615157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.615722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.615750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:53.924 [2024-12-13 22:58:32.615772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:17:53.924 [2024-12-13 22:58:32.615779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.678542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.678690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:53.924 [2024-12-13 22:58:32.678713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.722 ms 00:17:53.924 [2024-12-13 22:58:32.678725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 
22:58:32.702330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.702361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:53.924 [2024-12-13 22:58:32.702375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.511 ms 00:17:53.924 [2024-12-13 22:58:32.702383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.725127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.725252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:53.924 [2024-12-13 22:58:32.725270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.709 ms 00:17:53.924 [2024-12-13 22:58:32.725278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.748089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.748203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:53.924 [2024-12-13 22:58:32.748222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.776 ms 00:17:53.924 [2024-12-13 22:58:32.748231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.748263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.748271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:53.924 [2024-12-13 22:58:32.748286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:53.924 [2024-12-13 22:58:32.748293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.748375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.924 [2024-12-13 22:58:32.748385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:53.924 [2024-12-13 22:58:32.748395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:53.924 [2024-12-13 22:58:32.748402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.924 [2024-12-13 22:58:32.749302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2987.764 ms, result 0 00:17:53.924 { 00:17:53.924 "name": "ftl0", 00:17:53.924 "uuid": "36df95e5-07d6-432d-bbe7-25bdcbb6495c" 00:17:53.924 } 00:17:53.924 22:58:32 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:53.924 22:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:53.924 22:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:53.924 22:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:53.924 22:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:53.924 22:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:53.924 22:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:53.924 22:58:32 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:54.183 [ 00:17:54.183 { 00:17:54.183 "name": "ftl0", 00:17:54.183 "aliases": [ 00:17:54.183 "36df95e5-07d6-432d-bbe7-25bdcbb6495c" 00:17:54.183 ], 00:17:54.183 "product_name": "FTL 
disk", 00:17:54.183 "block_size": 4096, 00:17:54.183 "num_blocks": 20971520, 00:17:54.183 "uuid": "36df95e5-07d6-432d-bbe7-25bdcbb6495c", 00:17:54.183 "assigned_rate_limits": { 00:17:54.183 "rw_ios_per_sec": 0, 00:17:54.183 "rw_mbytes_per_sec": 0, 00:17:54.183 "r_mbytes_per_sec": 0, 00:17:54.183 "w_mbytes_per_sec": 0 00:17:54.183 }, 00:17:54.183 "claimed": false, 00:17:54.183 "zoned": false, 00:17:54.183 "supported_io_types": { 00:17:54.183 "read": true, 00:17:54.183 "write": true, 00:17:54.183 "unmap": true, 00:17:54.183 "flush": true, 00:17:54.183 "reset": false, 00:17:54.183 "nvme_admin": false, 00:17:54.183 "nvme_io": false, 00:17:54.183 "nvme_io_md": false, 00:17:54.183 "write_zeroes": true, 00:17:54.183 "zcopy": false, 00:17:54.183 "get_zone_info": false, 00:17:54.183 "zone_management": false, 00:17:54.183 "zone_append": false, 00:17:54.183 "compare": false, 00:17:54.183 "compare_and_write": false, 00:17:54.183 "abort": false, 00:17:54.183 "seek_hole": false, 00:17:54.183 "seek_data": false, 00:17:54.183 "copy": false, 00:17:54.183 "nvme_iov_md": false 00:17:54.183 }, 00:17:54.183 "driver_specific": { 00:17:54.183 "ftl": { 00:17:54.183 "base_bdev": "6a3cf9bd-9be2-47db-841b-46c6b282e7c1", 00:17:54.183 "cache": "nvc0n1p0" 00:17:54.183 } 00:17:54.183 } 00:17:54.183 } 00:17:54.183 ] 00:17:54.183 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:54.183 22:58:33 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:54.183 22:58:33 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:54.442 22:58:33 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:54.442 22:58:33 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:54.442 [2024-12-13 22:58:33.530181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.442 [2024-12-13 22:58:33.530223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:54.442 [2024-12-13 22:58:33.530233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.442 [2024-12-13 22:58:33.530241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.442 [2024-12-13 22:58:33.530271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.442 [2024-12-13 22:58:33.532370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.442 [2024-12-13 22:58:33.532393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:54.442 [2024-12-13 22:58:33.532404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:17:54.442 [2024-12-13 22:58:33.532410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.442 [2024-12-13 22:58:33.532805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.442 [2024-12-13 22:58:33.532817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:54.442 [2024-12-13 22:58:33.532825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:17:54.442 [2024-12-13 22:58:33.532831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.442 [2024-12-13 22:58:33.535259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.442 [2024-12-13 22:58:33.535277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:54.442 
[2024-12-13 22:58:33.535286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms 00:17:54.442 [2024-12-13 22:58:33.535292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.442 [2024-12-13 22:58:33.539973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.442 [2024-12-13 22:58:33.539996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:54.442 [2024-12-13 22:58:33.540006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.657 ms 00:17:54.442 [2024-12-13 22:58:33.540013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.442 [2024-12-13 22:58:33.558393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.442 [2024-12-13 22:58:33.558506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:54.442 [2024-12-13 22:58:33.558533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.314 ms 00:17:54.442 [2024-12-13 22:58:33.558538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.442 [2024-12-13 22:58:33.570197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.442 [2024-12-13 22:58:33.570225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:54.442 [2024-12-13 22:58:33.570239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.619 ms 00:17:54.442 [2024-12-13 22:58:33.570246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.442 [2024-12-13 22:58:33.570395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.442 [2024-12-13 22:58:33.570404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:54.442 [2024-12-13 22:58:33.570412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:17:54.442 [2024-12-13 22:58:33.570418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 22:58:33.587959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 22:58:33.587983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:54.702 [2024-12-13 22:58:33.587992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.516 ms 00:17:54.702 [2024-12-13 22:58:33.587997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 22:58:33.604929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 22:58:33.605021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:54.702 [2024-12-13 22:58:33.605037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.898 ms 00:17:54.702 [2024-12-13 22:58:33.605043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 22:58:33.621688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 22:58:33.621711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:54.702 [2024-12-13 22:58:33.621720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.611 ms 00:17:54.702 [2024-12-13 22:58:33.621726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 22:58:33.638999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.702 [2024-12-13 22:58:33.639023] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:54.702 [2024-12-13 22:58:33.639032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.169 ms 00:17:54.702 [2024-12-13 22:58:33.639037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.702 [2024-12-13 22:58:33.639072] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:54.702 [2024-12-13 22:58:33.639083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 
[2024-12-13 22:58:33.639230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:54.702 [2024-12-13 22:58:33.639273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:54.703 [2024-12-13 22:58:33.639396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:54.703 [2024-12-13 22:58:33.639778] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:54.703 [2024-12-13 22:58:33.639786] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36df95e5-07d6-432d-bbe7-25bdcbb6495c 00:17:54.703 [2024-12-13 22:58:33.639805] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:54.703 [2024-12-13 22:58:33.639814] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:54.703 [2024-12-13 22:58:33.639819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:54.703 [2024-12-13 22:58:33.639828] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:54.703 [2024-12-13 22:58:33.639834] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:54.703 [2024-12-13 22:58:33.639841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:54.703 [2024-12-13 22:58:33.639847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:54.703 [2024-12-13 22:58:33.639853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:54.703 [2024-12-13 22:58:33.639858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:54.703 [2024-12-13 22:58:33.639864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.703 [2024-12-13 22:58:33.639870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:54.703 [2024-12-13 22:58:33.639878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:17:54.703 [2024-12-13 22:58:33.639883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.703 [2024-12-13 22:58:33.649439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.703 [2024-12-13 22:58:33.649464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:54.703 [2024-12-13 22:58:33.649472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.524 ms 00:17:54.703 [2024-12-13 22:58:33.649478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.703 [2024-12-13 22:58:33.649752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.703 [2024-12-13 22:58:33.649776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:54.703 [2024-12-13 22:58:33.649784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:17:54.703 [2024-12-13 22:58:33.649790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.703 [2024-12-13 22:58:33.684298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.703 [2024-12-13 22:58:33.684327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.703 [2024-12-13 22:58:33.684337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.684343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
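The configuration and teardown steps traced above and below reduce to roughly the following sketch: the bdev subsystem config from save_subsystem_config is wrapped in a {"subsystems": [...]} envelope so fio's spdk_bdev engine can consume it, and the FTL bdev is unloaded (persisting its metadata) before the app process is killed. The output file name is illustrative; the log does not show where fio.sh redirects the generated JSON.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    {
        echo '{"subsystems": ['
        $RPC save_subsystem_config -n bdev
        echo ']}'
    } > /tmp/ftl_bdev_conf.json          # destination file name is illustrative
    $RPC bdev_ftl_unload -b ftl0         # persists FTL state; prints "true" on success, as seen below
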
00:17:54.704 [2024-12-13 22:58:33.684392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.684398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.704 [2024-12-13 22:58:33.684406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.684412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.684483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.684492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.704 [2024-12-13 22:58:33.684500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.684506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.684532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.684538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.704 [2024-12-13 22:58:33.684545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.684551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.747308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.747473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.704 [2024-12-13 22:58:33.747490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.747497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.796069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.796103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.704 [2024-12-13 22:58:33.796113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.796120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.796202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.796210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.704 [2024-12-13 22:58:33.796220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.796226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.796282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.796289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.704 [2024-12-13 22:58:33.796296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.796302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.796384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.796392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.704 [2024-12-13 22:58:33.796399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 
22:58:33.796407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.796455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.796462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:54.704 [2024-12-13 22:58:33.796469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.796475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.796513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.796519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.704 [2024-12-13 22:58:33.796526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.796532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.796579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.704 [2024-12-13 22:58:33.796586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.704 [2024-12-13 22:58:33.796593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.704 [2024-12-13 22:58:33.796598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.704 [2024-12-13 22:58:33.796728] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.527 ms, result 0 00:17:54.704 true 00:17:54.704 22:58:33 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76870 00:17:54.704 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76870 ']' 00:17:54.704 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76870 00:17:54.704 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:54.704 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.704 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76870 00:17:54.963 killing process with pid 76870 00:17:54.963 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.963 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.963 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76870' 00:17:54.963 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76870 00:17:54.963 22:58:33 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76870 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:01.624 22:58:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:01.624 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:01.624 fio-3.35 00:18:01.624 Starting 1 thread 00:18:06.914 00:18:06.914 test: (groupid=0, jobs=1): err= 0: pid=77049: Fri Dec 13 22:58:45 2024 00:18:06.914 read: IOPS=822, BW=54.6MiB/s (57.3MB/s)(255MiB/4659msec) 00:18:06.914 slat (usec): min=3, max=115, avg= 6.95, stdev= 4.07 00:18:06.914 clat (usec): min=251, max=1695, avg=551.72, stdev=246.03 00:18:06.914 lat (usec): min=256, max=1700, avg=558.67, stdev=247.67 00:18:06.914 clat percentiles (usec): 00:18:06.914 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 302], 20.00th=[ 310], 00:18:06.914 | 30.00th=[ 318], 40.00th=[ 441], 50.00th=[ 523], 60.00th=[ 562], 00:18:06.914 | 70.00th=[ 611], 80.00th=[ 824], 90.00th=[ 930], 95.00th=[ 988], 00:18:06.914 | 99.00th=[ 1172], 99.50th=[ 1254], 99.90th=[ 1483], 99.95th=[ 1565], 00:18:06.914 | 99.99th=[ 1696] 00:18:06.914 write: IOPS=829, BW=55.1MiB/s (57.7MB/s)(256MiB/4651msec); 0 zone resets 00:18:06.914 slat (usec): min=14, max=112, avg=25.09, stdev= 7.49 00:18:06.914 clat (usec): min=301, max=1604, avg=613.37, stdev=259.18 00:18:06.914 lat (usec): min=326, max=1625, avg=638.46, stdev=261.66 00:18:06.914 clat percentiles (usec): 00:18:06.914 | 1.00th=[ 314], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 338], 00:18:06.914 | 30.00th=[ 388], 40.00th=[ 502], 50.00th=[ 603], 60.00th=[ 644], 00:18:06.914 | 70.00th=[ 701], 80.00th=[ 906], 90.00th=[ 1004], 95.00th=[ 1057], 00:18:06.914 | 99.00th=[ 1287], 99.50th=[ 1352], 99.90th=[ 1418], 99.95th=[ 1483], 00:18:06.914 | 99.99th=[ 1598] 00:18:06.914 bw ( KiB/s): min=41208, max=85136, per=100.00%, avg=56579.33, stdev=17076.18, samples=9 00:18:06.914 iops : min= 606, max= 1252, avg=832.00, stdev=251.13, samples=9 00:18:06.914 lat (usec) : 500=42.81%, 750=32.81%, 1000=17.12% 00:18:06.914 
lat (msec) : 2=7.26% 00:18:06.914 cpu : usr=99.03%, sys=0.09%, ctx=12, majf=0, minf=1167 00:18:06.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:06.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.914 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:06.914 00:18:06.914 Run status group 0 (all jobs): 00:18:06.914 READ: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=255MiB (267MB), run=4659-4659msec 00:18:06.914 WRITE: bw=55.1MiB/s (57.7MB/s), 55.1MiB/s-55.1MiB/s (57.7MB/s-57.7MB/s), io=256MiB (269MB), run=4651-4651msec 00:18:08.305 ----------------------------------------------------- 00:18:08.305 Suppressions used: 00:18:08.305 count bytes template 00:18:08.305 1 5 /usr/src/fio/parse.c 00:18:08.305 1 8 libtcmalloc_minimal.so 00:18:08.305 1 904 libcrypto.so 00:18:08.305 ----------------------------------------------------- 00:18:08.305 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:08.305 22:58:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:08.305 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:08.305 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:08.305 fio-3.35 00:18:08.305 Starting 2 threads 00:18:34.848 00:18:34.848 first_half: (groupid=0, jobs=1): err= 0: pid=77152: Fri Dec 13 22:59:11 2024 00:18:34.848 read: IOPS=2886, BW=11.3MiB/s (11.8MB/s)(255MiB/22599msec) 00:18:34.848 slat (nsec): min=3070, max=45664, avg=4001.37, stdev=1010.79 00:18:34.848 clat (usec): min=570, max=314665, avg=35040.25, stdev=16487.16 00:18:34.848 lat (usec): min=575, max=314668, avg=35044.25, stdev=16487.28 00:18:34.848 clat percentiles (msec): 00:18:34.848 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:18:34.848 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:18:34.848 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 50], 00:18:34.848 | 99.00th=[ 122], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 255], 00:18:34.848 | 99.99th=[ 309] 00:18:34.848 write: IOPS=3529, BW=13.8MiB/s (14.5MB/s)(256MiB/18568msec); 0 zone resets 00:18:34.848 slat (usec): min=3, max=2329, avg= 6.05, stdev=11.78 00:18:34.848 clat (usec): min=370, max=72409, avg=9214.29, stdev=15028.32 00:18:34.848 lat (usec): min=379, max=72414, avg=9220.34, stdev=15028.53 00:18:34.848 clat percentiles (usec): 00:18:34.848 | 1.00th=[ 676], 5.00th=[ 766], 10.00th=[ 848], 20.00th=[ 1123], 00:18:34.848 | 30.00th=[ 1926], 40.00th=[ 3195], 50.00th=[ 4359], 60.00th=[ 5276], 00:18:34.848 | 70.00th=[ 6128], 80.00th=[11207], 90.00th=[18482], 95.00th=[59507], 00:18:34.848 | 99.00th=[66847], 99.50th=[67634], 99.90th=[70779], 99.95th=[70779], 00:18:34.848 | 99.99th=[71828] 00:18:34.848 bw ( KiB/s): min= 1000, max=42144, per=93.36%, avg=23828.55, stdev=13392.40, samples=22 00:18:34.849 iops : min= 250, max=10536, avg=5957.14, stdev=3348.10, samples=22 00:18:34.849 lat (usec) : 500=0.03%, 750=2.11%, 1000=5.53% 00:18:34.849 lat (msec) : 2=7.73%, 4=8.46%, 10=15.75%, 20=6.94%, 50=47.90% 00:18:34.849 lat (msec) : 100=4.67%, 250=0.86%, 500=0.03% 00:18:34.849 cpu : usr=99.33%, sys=0.13%, ctx=33, majf=0, minf=5597 00:18:34.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:34.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.849 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.849 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.849 second_half: (groupid=0, jobs=1): err= 0: pid=77153: Fri Dec 13 22:59:11 2024 00:18:34.849 read: IOPS=2868, BW=11.2MiB/s (11.8MB/s)(255MiB/22768msec) 00:18:34.849 slat (usec): min=3, max=132, avg= 5.36, stdev= 1.34 00:18:34.849 clat (usec): min=672, max=328733, avg=34401.35, stdev=18047.69 00:18:34.849 lat (usec): min=677, max=328743, avg=34406.71, stdev=18047.76 00:18:34.849 clat percentiles (msec): 00:18:34.849 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 31], 00:18:34.849 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:18:34.849 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 39], 95.00th=[ 45], 00:18:34.849 | 99.00th=[ 
138], 99.50th=[ 155], 99.90th=[ 199], 99.95th=[ 218], 00:18:34.849 | 99.99th=[ 317] 00:18:34.849 write: IOPS=3190, BW=12.5MiB/s (13.1MB/s)(256MiB/20541msec); 0 zone resets 00:18:34.849 slat (usec): min=3, max=316, avg= 7.07, stdev= 3.38 00:18:34.849 clat (usec): min=321, max=72603, avg=10161.40, stdev=16041.92 00:18:34.849 lat (usec): min=337, max=72611, avg=10168.47, stdev=16042.16 00:18:34.849 clat percentiles (usec): 00:18:34.849 | 1.00th=[ 668], 5.00th=[ 783], 10.00th=[ 914], 20.00th=[ 1188], 00:18:34.849 | 30.00th=[ 1582], 40.00th=[ 2933], 50.00th=[ 3916], 60.00th=[ 4948], 00:18:34.849 | 70.00th=[ 6456], 80.00th=[14353], 90.00th=[28705], 95.00th=[60031], 00:18:34.849 | 99.00th=[67634], 99.50th=[68682], 99.90th=[70779], 99.95th=[71828], 00:18:34.849 | 99.99th=[71828] 00:18:34.849 bw ( KiB/s): min= 208, max=65512, per=89.31%, avg=22795.00, stdev=16876.53, samples=23 00:18:34.849 iops : min= 52, max=16378, avg=5698.70, stdev=4219.14, samples=23 00:18:34.849 lat (usec) : 500=0.02%, 750=1.80%, 1000=4.69% 00:18:34.849 lat (msec) : 2=9.83%, 4=9.23%, 10=13.49%, 20=6.82%, 50=48.80% 00:18:34.849 lat (msec) : 100=4.35%, 250=0.95%, 500=0.01% 00:18:34.849 cpu : usr=99.28%, sys=0.09%, ctx=58, majf=0, minf=5516 00:18:34.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:34.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.849 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.849 issued rwts: total=65320,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.849 00:18:34.849 Run status group 0 (all jobs): 00:18:34.849 READ: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-11.3MiB/s (11.8MB/s-11.8MB/s), io=510MiB (535MB), run=22599-22768msec 00:18:34.849 WRITE: bw=24.9MiB/s (26.1MB/s), 12.5MiB/s-13.8MiB/s (13.1MB/s-14.5MB/s), io=512MiB (537MB), run=18568-20541msec 00:18:34.849 ----------------------------------------------------- 00:18:34.849 Suppressions used: 00:18:34.849 count bytes template 00:18:34.849 2 10 /usr/src/fio/parse.c 00:18:34.849 2 192 /usr/src/fio/iolog.c 00:18:34.849 1 8 libtcmalloc_minimal.so 00:18:34.849 1 904 libcrypto.so 00:18:34.849 ----------------------------------------------------- 00:18:34.849 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:34.849 22:59:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:34.849 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:34.849 fio-3.35 00:18:34.849 Starting 1 thread 00:18:49.736 00:18:49.736 test: (groupid=0, jobs=1): err= 0: pid=77461: Fri Dec 13 22:59:28 2024 00:18:49.736 read: IOPS=8040, BW=31.4MiB/s (32.9MB/s)(255MiB/8109msec) 00:18:49.736 slat (nsec): min=3018, max=41799, avg=3393.86, stdev=753.95 00:18:49.736 clat (usec): min=488, max=32268, avg=15912.42, stdev=1970.14 00:18:49.736 lat (usec): min=495, max=32271, avg=15915.82, stdev=1970.15 00:18:49.736 clat percentiles (usec): 00:18:49.736 | 1.00th=[14091], 5.00th=[14353], 10.00th=[14615], 20.00th=[14746], 00:18:49.736 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15270], 60.00th=[15533], 00:18:49.736 | 70.00th=[15795], 80.00th=[16450], 90.00th=[17957], 95.00th=[20841], 00:18:49.736 | 99.00th=[23200], 99.50th=[23725], 99.90th=[25560], 99.95th=[28181], 00:18:49.736 | 99.99th=[31851] 00:18:49.736 write: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(256MiB/5343msec); 0 zone resets 00:18:49.736 slat (usec): min=3, max=660, avg= 5.76, stdev= 4.63 00:18:49.736 clat (usec): min=445, max=55551, avg=10391.07, stdev=11055.14 00:18:49.736 lat (usec): min=457, max=55557, avg=10396.83, stdev=11055.23 00:18:49.736 clat percentiles (usec): 00:18:49.736 | 1.00th=[ 685], 5.00th=[ 873], 10.00th=[ 1004], 20.00th=[ 1172], 00:18:49.736 | 30.00th=[ 1352], 40.00th=[ 2040], 50.00th=[ 7504], 60.00th=[ 9896], 00:18:49.736 | 70.00th=[12387], 80.00th=[16450], 90.00th=[31065], 95.00th=[32900], 00:18:49.736 | 99.00th=[39584], 99.50th=[41157], 99.90th=[50070], 99.95th=[51119], 00:18:49.736 | 99.99th=[53740] 00:18:49.736 bw ( KiB/s): min=31504, max=64544, per=97.14%, avg=47662.55, stdev=9945.33, samples=11 00:18:49.736 iops : min= 7876, max=16136, avg=11915.64, stdev=2486.33, samples=11 00:18:49.736 lat (usec) : 500=0.01%, 750=1.01%, 1000=3.96% 00:18:49.736 lat (msec) : 2=15.00%, 4=1.12%, 10=9.36%, 20=57.97%, 50=11.51% 00:18:49.736 lat (msec) : 100=0.06% 00:18:49.736 cpu : usr=99.00%, sys=0.17%, ctx=25, majf=0, minf=5563 00:18:49.736 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:49.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.736 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.736 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.736 00:18:49.736 Run status group 0 (all jobs): 00:18:49.736 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=255MiB (267MB), run=8109-8109msec 00:18:49.736 WRITE: bw=47.9MiB/s (50.2MB/s), 47.9MiB/s-47.9MiB/s (50.2MB/s-50.2MB/s), io=256MiB (268MB), run=5343-5343msec 00:18:51.123 ----------------------------------------------------- 00:18:51.123 Suppressions used: 00:18:51.123 count bytes template 00:18:51.123 1 5 /usr/src/fio/parse.c 00:18:51.123 2 192 /usr/src/fio/iolog.c 00:18:51.123 1 8 libtcmalloc_minimal.so 00:18:51.123 1 904 libcrypto.so 00:18:51.123 ----------------------------------------------------- 00:18:51.123 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:51.123 Remove shared memory files 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58916 /dev/shm/spdk_tgt_trace.pid75797 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:51.123 ************************************ 00:18:51.123 END TEST ftl_fio_basic 00:18:51.123 ************************************ 00:18:51.123 00:18:51.123 real 1m4.014s 00:18:51.123 user 2m14.950s 00:18:51.123 sys 0m2.876s 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.123 22:59:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:51.123 22:59:30 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:51.123 22:59:30 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:51.123 22:59:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.123 22:59:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:51.123 ************************************ 00:18:51.123 START TEST ftl_bdevperf 00:18:51.123 ************************************ 00:18:51.123 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:51.385 * Looking for test storage... 
00:18:51.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:51.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.385 --rc genhtml_branch_coverage=1 00:18:51.385 --rc genhtml_function_coverage=1 00:18:51.385 --rc genhtml_legend=1 00:18:51.385 --rc geninfo_all_blocks=1 00:18:51.385 --rc geninfo_unexecuted_blocks=1 00:18:51.385 00:18:51.385 ' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:51.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.385 --rc genhtml_branch_coverage=1 00:18:51.385 
--rc genhtml_function_coverage=1 00:18:51.385 --rc genhtml_legend=1 00:18:51.385 --rc geninfo_all_blocks=1 00:18:51.385 --rc geninfo_unexecuted_blocks=1 00:18:51.385 00:18:51.385 ' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:51.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.385 --rc genhtml_branch_coverage=1 00:18:51.385 --rc genhtml_function_coverage=1 00:18:51.385 --rc genhtml_legend=1 00:18:51.385 --rc geninfo_all_blocks=1 00:18:51.385 --rc geninfo_unexecuted_blocks=1 00:18:51.385 00:18:51.385 ' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:51.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.385 --rc genhtml_branch_coverage=1 00:18:51.385 --rc genhtml_function_coverage=1 00:18:51.385 --rc genhtml_legend=1 00:18:51.385 --rc geninfo_all_blocks=1 00:18:51.385 --rc geninfo_unexecuted_blocks=1 00:18:51.385 00:18:51.385 ' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77705 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77705 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77705 ']' 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:51.385 22:59:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:51.385 [2024-12-13 22:59:30.480397] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:51.385 [2024-12-13 22:59:30.480913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77705 ] 00:18:51.647 [2024-12-13 22:59:30.646684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.647 [2024-12-13 22:59:30.768740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.220 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.220 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:52.220 22:59:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:52.220 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:52.220 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:52.220 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:52.220 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:52.220 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:52.483 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:52.483 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:52.483 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:52.483 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:52.483 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:52.483 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:52.483 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:52.483 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:52.745 { 00:18:52.745 "name": "nvme0n1", 00:18:52.745 "aliases": [ 00:18:52.745 "415a1151-80d4-4e9a-8193-d0b27e3cc502" 00:18:52.745 ], 00:18:52.745 "product_name": "NVMe disk", 00:18:52.745 "block_size": 4096, 00:18:52.745 "num_blocks": 1310720, 00:18:52.745 "uuid": "415a1151-80d4-4e9a-8193-d0b27e3cc502", 00:18:52.745 "numa_id": -1, 00:18:52.745 "assigned_rate_limits": { 00:18:52.745 "rw_ios_per_sec": 0, 00:18:52.745 "rw_mbytes_per_sec": 0, 00:18:52.745 "r_mbytes_per_sec": 0, 00:18:52.745 "w_mbytes_per_sec": 0 00:18:52.745 }, 00:18:52.745 "claimed": true, 00:18:52.745 "claim_type": "read_many_write_one", 00:18:52.745 "zoned": false, 00:18:52.745 "supported_io_types": { 00:18:52.745 "read": true, 00:18:52.745 "write": true, 00:18:52.745 "unmap": true, 00:18:52.745 "flush": true, 00:18:52.745 "reset": true, 00:18:52.745 "nvme_admin": true, 00:18:52.745 "nvme_io": true, 00:18:52.745 "nvme_io_md": false, 00:18:52.745 "write_zeroes": true, 00:18:52.745 "zcopy": false, 00:18:52.745 "get_zone_info": false, 00:18:52.745 "zone_management": false, 00:18:52.745 "zone_append": false, 00:18:52.745 "compare": true, 00:18:52.745 "compare_and_write": false, 00:18:52.745 "abort": true, 00:18:52.745 "seek_hole": false, 00:18:52.745 "seek_data": false, 00:18:52.745 "copy": true, 00:18:52.745 "nvme_iov_md": false 00:18:52.745 }, 00:18:52.745 "driver_specific": { 00:18:52.745 
"nvme": [ 00:18:52.745 { 00:18:52.745 "pci_address": "0000:00:11.0", 00:18:52.745 "trid": { 00:18:52.745 "trtype": "PCIe", 00:18:52.745 "traddr": "0000:00:11.0" 00:18:52.745 }, 00:18:52.745 "ctrlr_data": { 00:18:52.745 "cntlid": 0, 00:18:52.745 "vendor_id": "0x1b36", 00:18:52.745 "model_number": "QEMU NVMe Ctrl", 00:18:52.745 "serial_number": "12341", 00:18:52.745 "firmware_revision": "8.0.0", 00:18:52.745 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:52.745 "oacs": { 00:18:52.745 "security": 0, 00:18:52.745 "format": 1, 00:18:52.745 "firmware": 0, 00:18:52.745 "ns_manage": 1 00:18:52.745 }, 00:18:52.745 "multi_ctrlr": false, 00:18:52.745 "ana_reporting": false 00:18:52.745 }, 00:18:52.745 "vs": { 00:18:52.745 "nvme_version": "1.4" 00:18:52.745 }, 00:18:52.745 "ns_data": { 00:18:52.745 "id": 1, 00:18:52.745 "can_share": false 00:18:52.745 } 00:18:52.745 } 00:18:52.745 ], 00:18:52.745 "mp_policy": "active_passive" 00:18:52.745 } 00:18:52.745 } 00:18:52.745 ]' 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:52.745 22:59:31 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:53.007 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=44b66f9a-fb67-4bc4-b93c-69094ea4c564 00:18:53.007 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:53.007 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 44b66f9a-fb67-4bc4-b93c-69094ea4c564 00:18:53.268 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:53.529 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=62c989e6-2dee-41a9-89cd-1b6512cd1464 00:18:53.530 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 62c989e6-2dee-41a9-89cd-1b6512cd1464 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:53.791 22:59:32 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:53.791 22:59:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:54.053 { 00:18:54.053 "name": "71dcb2ad-9a69-4d45-a040-6cadf22aece9", 00:18:54.053 "aliases": [ 00:18:54.053 "lvs/nvme0n1p0" 00:18:54.053 ], 00:18:54.053 "product_name": "Logical Volume", 00:18:54.053 "block_size": 4096, 00:18:54.053 "num_blocks": 26476544, 00:18:54.053 "uuid": "71dcb2ad-9a69-4d45-a040-6cadf22aece9", 00:18:54.053 "assigned_rate_limits": { 00:18:54.053 "rw_ios_per_sec": 0, 00:18:54.053 "rw_mbytes_per_sec": 0, 00:18:54.053 "r_mbytes_per_sec": 0, 00:18:54.053 "w_mbytes_per_sec": 0 00:18:54.053 }, 00:18:54.053 "claimed": false, 00:18:54.053 "zoned": false, 00:18:54.053 "supported_io_types": { 00:18:54.053 "read": true, 00:18:54.053 "write": true, 00:18:54.053 "unmap": true, 00:18:54.053 "flush": false, 00:18:54.053 "reset": true, 00:18:54.053 "nvme_admin": false, 00:18:54.053 "nvme_io": false, 00:18:54.053 "nvme_io_md": false, 00:18:54.053 "write_zeroes": true, 00:18:54.053 "zcopy": false, 00:18:54.053 "get_zone_info": false, 00:18:54.053 "zone_management": false, 00:18:54.053 "zone_append": false, 00:18:54.053 "compare": false, 00:18:54.053 "compare_and_write": false, 00:18:54.053 "abort": false, 00:18:54.053 "seek_hole": true, 00:18:54.053 "seek_data": true, 00:18:54.053 "copy": false, 00:18:54.053 "nvme_iov_md": false 00:18:54.053 }, 00:18:54.053 "driver_specific": { 00:18:54.053 "lvol": { 00:18:54.053 "lvol_store_uuid": "62c989e6-2dee-41a9-89cd-1b6512cd1464", 00:18:54.053 "base_bdev": "nvme0n1", 00:18:54.053 "thin_provision": true, 00:18:54.053 "num_allocated_clusters": 0, 00:18:54.053 "snapshot": false, 00:18:54.053 "clone": false, 00:18:54.053 "esnap_clone": false 00:18:54.053 } 00:18:54.053 } 00:18:54.053 } 00:18:54.053 ]' 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:54.053 22:59:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:54.315 22:59:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:54.315 22:59:33 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:54.315 22:59:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:54.315 22:59:33 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:54.315 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:54.315 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:54.315 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:54.315 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:54.576 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:54.576 { 00:18:54.576 "name": "71dcb2ad-9a69-4d45-a040-6cadf22aece9", 00:18:54.576 "aliases": [ 00:18:54.576 "lvs/nvme0n1p0" 00:18:54.576 ], 00:18:54.576 "product_name": "Logical Volume", 00:18:54.576 "block_size": 4096, 00:18:54.576 "num_blocks": 26476544, 00:18:54.576 "uuid": "71dcb2ad-9a69-4d45-a040-6cadf22aece9", 00:18:54.576 "assigned_rate_limits": { 00:18:54.576 "rw_ios_per_sec": 0, 00:18:54.576 "rw_mbytes_per_sec": 0, 00:18:54.576 "r_mbytes_per_sec": 0, 00:18:54.576 "w_mbytes_per_sec": 0 00:18:54.576 }, 00:18:54.576 "claimed": false, 00:18:54.576 "zoned": false, 00:18:54.576 "supported_io_types": { 00:18:54.576 "read": true, 00:18:54.576 "write": true, 00:18:54.576 "unmap": true, 00:18:54.576 "flush": false, 00:18:54.576 "reset": true, 00:18:54.576 "nvme_admin": false, 00:18:54.576 "nvme_io": false, 00:18:54.576 "nvme_io_md": false, 00:18:54.576 "write_zeroes": true, 00:18:54.576 "zcopy": false, 00:18:54.576 "get_zone_info": false, 00:18:54.576 "zone_management": false, 00:18:54.576 "zone_append": false, 00:18:54.576 "compare": false, 00:18:54.576 "compare_and_write": false, 00:18:54.576 "abort": false, 00:18:54.576 "seek_hole": true, 00:18:54.576 "seek_data": true, 00:18:54.576 "copy": false, 00:18:54.576 "nvme_iov_md": false 00:18:54.576 }, 00:18:54.576 "driver_specific": { 00:18:54.576 "lvol": { 00:18:54.576 "lvol_store_uuid": "62c989e6-2dee-41a9-89cd-1b6512cd1464", 00:18:54.576 "base_bdev": "nvme0n1", 00:18:54.576 "thin_provision": true, 00:18:54.576 "num_allocated_clusters": 0, 00:18:54.576 "snapshot": false, 00:18:54.576 "clone": false, 00:18:54.576 "esnap_clone": false 00:18:54.576 } 00:18:54.576 } 00:18:54.576 } 00:18:54.576 ]' 00:18:54.576 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:54.576 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:54.576 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:54.576 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:54.576 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:54.576 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:54.576 22:59:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:54.576 22:59:33 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:54.836 22:59:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:54.836 22:59:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:54.836 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:54.836 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:54.836 22:59:33 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:54.836 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:54.836 22:59:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71dcb2ad-9a69-4d45-a040-6cadf22aece9 00:18:55.094 22:59:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:55.094 { 00:18:55.094 "name": "71dcb2ad-9a69-4d45-a040-6cadf22aece9", 00:18:55.094 "aliases": [ 00:18:55.094 "lvs/nvme0n1p0" 00:18:55.094 ], 00:18:55.094 "product_name": "Logical Volume", 00:18:55.094 "block_size": 4096, 00:18:55.094 "num_blocks": 26476544, 00:18:55.094 "uuid": "71dcb2ad-9a69-4d45-a040-6cadf22aece9", 00:18:55.094 "assigned_rate_limits": { 00:18:55.094 "rw_ios_per_sec": 0, 00:18:55.094 "rw_mbytes_per_sec": 0, 00:18:55.094 "r_mbytes_per_sec": 0, 00:18:55.094 "w_mbytes_per_sec": 0 00:18:55.094 }, 00:18:55.094 "claimed": false, 00:18:55.094 "zoned": false, 00:18:55.094 "supported_io_types": { 00:18:55.094 "read": true, 00:18:55.094 "write": true, 00:18:55.094 "unmap": true, 00:18:55.094 "flush": false, 00:18:55.094 "reset": true, 00:18:55.094 "nvme_admin": false, 00:18:55.094 "nvme_io": false, 00:18:55.094 "nvme_io_md": false, 00:18:55.094 "write_zeroes": true, 00:18:55.094 "zcopy": false, 00:18:55.094 "get_zone_info": false, 00:18:55.094 "zone_management": false, 00:18:55.094 "zone_append": false, 00:18:55.094 "compare": false, 00:18:55.094 "compare_and_write": false, 00:18:55.094 "abort": false, 00:18:55.094 "seek_hole": true, 00:18:55.094 "seek_data": true, 00:18:55.094 "copy": false, 00:18:55.094 "nvme_iov_md": false 00:18:55.094 }, 00:18:55.094 "driver_specific": { 00:18:55.094 "lvol": { 00:18:55.095 "lvol_store_uuid": "62c989e6-2dee-41a9-89cd-1b6512cd1464", 00:18:55.095 "base_bdev": "nvme0n1", 00:18:55.095 "thin_provision": true, 00:18:55.095 "num_allocated_clusters": 0, 00:18:55.095 "snapshot": false, 00:18:55.095 "clone": false, 00:18:55.095 "esnap_clone": false 00:18:55.095 } 00:18:55.095 } 00:18:55.095 } 00:18:55.095 ]' 00:18:55.095 22:59:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:55.095 22:59:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:55.095 22:59:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:55.095 22:59:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:55.095 22:59:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:55.095 22:59:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:55.095 22:59:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:55.095 22:59:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 71dcb2ad-9a69-4d45-a040-6cadf22aece9 -c nvc0n1p0 --l2p_dram_limit 20 00:18:55.356 [2024-12-13 22:59:34.293097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.293253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:55.356 [2024-12-13 22:59:34.293271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:55.356 [2024-12-13 22:59:34.293279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.293329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.293338] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:55.356 [2024-12-13 22:59:34.293349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:55.356 [2024-12-13 22:59:34.293356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.293370] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:55.356 [2024-12-13 22:59:34.293923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:55.356 [2024-12-13 22:59:34.293937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.293944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:55.356 [2024-12-13 22:59:34.293951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:18:55.356 [2024-12-13 22:59:34.293958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.294005] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5fde6870-7cb9-424a-9436-1d922db96713 00:18:55.356 [2024-12-13 22:59:34.294958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.294979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:55.356 [2024-12-13 22:59:34.294990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:55.356 [2024-12-13 22:59:34.294996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.299742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.299789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:55.356 [2024-12-13 22:59:34.299799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:18:55.356 [2024-12-13 22:59:34.299807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.299887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.299900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:55.356 [2024-12-13 22:59:34.299911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:55.356 [2024-12-13 22:59:34.299917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.299953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.299960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:55.356 [2024-12-13 22:59:34.299968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:55.356 [2024-12-13 22:59:34.299973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.299991] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:55.356 [2024-12-13 22:59:34.302823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.302850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:55.356 [2024-12-13 22:59:34.302857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:18:55.356 [2024-12-13 22:59:34.302868] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.302893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.302900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:55.356 [2024-12-13 22:59:34.302906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:55.356 [2024-12-13 22:59:34.302913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.302930] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:55.356 [2024-12-13 22:59:34.303049] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:55.356 [2024-12-13 22:59:34.303059] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:55.356 [2024-12-13 22:59:34.303068] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:55.356 [2024-12-13 22:59:34.303076] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:55.356 [2024-12-13 22:59:34.303084] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:55.356 [2024-12-13 22:59:34.303090] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:55.356 [2024-12-13 22:59:34.303097] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:55.356 [2024-12-13 22:59:34.303102] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:55.356 [2024-12-13 22:59:34.303110] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:55.356 [2024-12-13 22:59:34.303117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.303125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:55.356 [2024-12-13 22:59:34.303131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:18:55.356 [2024-12-13 22:59:34.303138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.303201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.356 [2024-12-13 22:59:34.303209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:55.356 [2024-12-13 22:59:34.303214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:55.356 [2024-12-13 22:59:34.303222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.356 [2024-12-13 22:59:34.303289] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:55.356 [2024-12-13 22:59:34.303299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:55.356 [2024-12-13 22:59:34.303305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:55.356 [2024-12-13 22:59:34.303312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:55.357 [2024-12-13 22:59:34.303325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:55.357 
[2024-12-13 22:59:34.303336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:55.357 [2024-12-13 22:59:34.303341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:55.357 [2024-12-13 22:59:34.303352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:55.357 [2024-12-13 22:59:34.303364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:55.357 [2024-12-13 22:59:34.303369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:55.357 [2024-12-13 22:59:34.303376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:55.357 [2024-12-13 22:59:34.303381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:55.357 [2024-12-13 22:59:34.303392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:55.357 [2024-12-13 22:59:34.303404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:55.357 [2024-12-13 22:59:34.303409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:55.357 [2024-12-13 22:59:34.303420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.357 [2024-12-13 22:59:34.303432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:55.357 [2024-12-13 22:59:34.303438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.357 [2024-12-13 22:59:34.303450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:55.357 [2024-12-13 22:59:34.303455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.357 [2024-12-13 22:59:34.303466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:55.357 [2024-12-13 22:59:34.303472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:55.357 [2024-12-13 22:59:34.303484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:55.357 [2024-12-13 22:59:34.303490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:55.357 [2024-12-13 22:59:34.303501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:55.357 [2024-12-13 22:59:34.303507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:55.357 [2024-12-13 22:59:34.303511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:55.357 [2024-12-13 22:59:34.303519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:55.357 [2024-12-13 22:59:34.303524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:55.357 [2024-12-13 22:59:34.303529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:55.357 [2024-12-13 22:59:34.303540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:55.357 [2024-12-13 22:59:34.303545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303551] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:55.357 [2024-12-13 22:59:34.303557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:55.357 [2024-12-13 22:59:34.303564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:55.357 [2024-12-13 22:59:34.303570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:55.357 [2024-12-13 22:59:34.303580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:55.357 [2024-12-13 22:59:34.303585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:55.357 [2024-12-13 22:59:34.303591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:55.357 [2024-12-13 22:59:34.303596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:55.357 [2024-12-13 22:59:34.303603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:55.357 [2024-12-13 22:59:34.303608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:55.357 [2024-12-13 22:59:34.303615] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:55.357 [2024-12-13 22:59:34.303622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:55.357 [2024-12-13 22:59:34.303629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:55.357 [2024-12-13 22:59:34.303635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:55.357 [2024-12-13 22:59:34.303642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:55.357 [2024-12-13 22:59:34.303647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:55.357 [2024-12-13 22:59:34.303654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:55.357 [2024-12-13 22:59:34.303659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:55.357 [2024-12-13 22:59:34.303666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:55.357 [2024-12-13 22:59:34.303671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:55.357 [2024-12-13 22:59:34.303680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:55.357 [2024-12-13 22:59:34.303685] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:55.357 [2024-12-13 22:59:34.303691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:55.357 [2024-12-13 22:59:34.303697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:55.357 [2024-12-13 22:59:34.303704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:55.357 [2024-12-13 22:59:34.303709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:55.357 [2024-12-13 22:59:34.303716] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:55.357 [2024-12-13 22:59:34.303722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:55.357 [2024-12-13 22:59:34.303731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:55.357 [2024-12-13 22:59:34.303736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:55.357 [2024-12-13 22:59:34.303743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:55.357 [2024-12-13 22:59:34.303748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:55.357 [2024-12-13 22:59:34.303764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.357 [2024-12-13 22:59:34.303778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:55.357 [2024-12-13 22:59:34.303784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:18:55.357 [2024-12-13 22:59:34.303790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.357 [2024-12-13 22:59:34.303818] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
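For reference, the geometry and layout figures above are internally consistent: the base device reports 26476544 blocks of 4096 bytes, i.e. 103424 MiB, matching the bdev size computed from the lvol earlier, and the 20971520 L2P entries at 4 bytes each account for the 80.00 MiB l2p region. The superblock metadata entries express offsets and sizes as hex counts of 4 KiB blocks, so the 0x5000-block entry lines up with that same l2p region and the four 0x800-block entries line up with p2l0 through p2l3. Only 20 MiB of the L2P table may stay resident in DRAM (the --l2p_dram_limit 20 argument), which is what the "19 (of 20) MiB" resident-size notice further below refers to. A quick shell cross-check (an illustrative sketch, not part of the test output):

  echo $(( 26476544 * 4096 / 1024 / 1024 ))   # base device capacity in MiB -> 103424
  echo $(( 20971520 * 4 / 1024 / 1024 ))      # full L2P table size in MiB  -> 80
  echo $(( 0x5000 * 4 / 1024 ))               # blk_sz 0x5000 in MiB        -> 80 (l2p)
  echo $(( 0x800 * 4 / 1024 ))                # blk_sz 0x800 in MiB         -> 8 (each P2L checkpoint region)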
00:18:55.357 [2024-12-13 22:59:34.303825] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:59.567 [2024-12-13 22:59:38.346780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.346863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:59.567 [2024-12-13 22:59:38.346885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4042.940 ms 00:18:59.567 [2024-12-13 22:59:38.346895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.379073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.379139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.567 [2024-12-13 22:59:38.379158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.948 ms 00:18:59.567 [2024-12-13 22:59:38.379167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.379313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.379325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:59.567 [2024-12-13 22:59:38.379340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:59.567 [2024-12-13 22:59:38.379348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.427988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.428051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.567 [2024-12-13 22:59:38.428070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.603 ms 00:18:59.567 [2024-12-13 22:59:38.428079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.428129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.428139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.567 [2024-12-13 22:59:38.428151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:59.567 [2024-12-13 22:59:38.428162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.428797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.428823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.567 [2024-12-13 22:59:38.428837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:18:59.567 [2024-12-13 22:59:38.428845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.428968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.428978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.567 [2024-12-13 22:59:38.428992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:59.567 [2024-12-13 22:59:38.429000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.445057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.445105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.567 [2024-12-13 
22:59:38.445120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.034 ms 00:18:59.567 [2024-12-13 22:59:38.445137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.458644] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:59.567 [2024-12-13 22:59:38.466742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.466802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:59.567 [2024-12-13 22:59:38.466815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.502 ms 00:18:59.567 [2024-12-13 22:59:38.466826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.568295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.568372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:59.567 [2024-12-13 22:59:38.568387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.436 ms 00:18:59.567 [2024-12-13 22:59:38.568398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.568599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.568618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:59.567 [2024-12-13 22:59:38.568627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:18:59.567 [2024-12-13 22:59:38.568641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.595732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.595808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:59.567 [2024-12-13 22:59:38.595823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.037 ms 00:18:59.567 [2024-12-13 22:59:38.595835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.621201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.621253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:59.567 [2024-12-13 22:59:38.621265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.315 ms 00:18:59.567 [2024-12-13 22:59:38.621275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.567 [2024-12-13 22:59:38.621920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.567 [2024-12-13 22:59:38.621945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:59.567 [2024-12-13 22:59:38.621955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:18:59.567 [2024-12-13 22:59:38.621965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.828 [2024-12-13 22:59:38.710163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.828 [2024-12-13 22:59:38.710226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:59.828 [2024-12-13 22:59:38.710240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.156 ms 00:18:59.828 [2024-12-13 22:59:38.710252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.828 [2024-12-13 
22:59:38.738263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.828 [2024-12-13 22:59:38.738319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:59.828 [2024-12-13 22:59:38.738336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.932 ms 00:18:59.828 [2024-12-13 22:59:38.738346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.828 [2024-12-13 22:59:38.764748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.828 [2024-12-13 22:59:38.764811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:59.828 [2024-12-13 22:59:38.764824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.353 ms 00:18:59.828 [2024-12-13 22:59:38.764833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.828 [2024-12-13 22:59:38.791294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.828 [2024-12-13 22:59:38.791354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:59.828 [2024-12-13 22:59:38.791368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.412 ms 00:18:59.828 [2024-12-13 22:59:38.791377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.828 [2024-12-13 22:59:38.791430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.828 [2024-12-13 22:59:38.791446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:59.828 [2024-12-13 22:59:38.791455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:59.828 [2024-12-13 22:59:38.791465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.828 [2024-12-13 22:59:38.791555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.828 [2024-12-13 22:59:38.791570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:59.828 [2024-12-13 22:59:38.791578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:59.828 [2024-12-13 22:59:38.791588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.828 [2024-12-13 22:59:38.792787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4499.179 ms, result 0 00:18:59.828 { 00:18:59.828 "name": "ftl0", 00:18:59.828 "uuid": "5fde6870-7cb9-424a-9436-1d922db96713" 00:18:59.828 } 00:18:59.828 22:59:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:59.828 22:59:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:59.828 22:59:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:00.089 22:59:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:00.089 [2024-12-13 22:59:39.056720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:00.089 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:00.089 Zero copy mechanism will not be used. 00:19:00.089 Running I/O for 4 seconds... 
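This first bdevperf pass issues single-outstanding 68 KiB random writes: 69632 bytes is 17 blocks of 4096 bytes, which is also why the notice above reports that the 65536-byte zero copy threshold is exceeded. The MiB/s column in the samples and summary that follow is simply IOPS multiplied by that I/O size. A quick cross-check against the totals reported below (an illustrative sketch, not part of the test output):

  awk 'BEGIN { printf "%.2f\n", 758.61 * 69632 / 1048576 }'   # -> 50.38 MiB/s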
00:19:01.973 829.00 IOPS, 55.05 MiB/s [2024-12-13T22:59:42.499Z] 843.50 IOPS, 56.01 MiB/s [2024-12-13T22:59:43.072Z] 786.33 IOPS, 52.22 MiB/s [2024-12-13T22:59:43.072Z] 758.75 IOPS, 50.39 MiB/s 00:19:03.932 Latency(us) 00:19:03.932 [2024-12-13T22:59:43.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.932 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:03.932 ftl0 : 4.00 758.61 50.38 0.00 0.00 1398.67 371.79 3579.27 00:19:03.932 [2024-12-13T22:59:43.072Z] =================================================================================================================== 00:19:03.932 [2024-12-13T22:59:43.072Z] Total : 758.61 50.38 0.00 0.00 1398.67 371.79 3579.27 00:19:03.932 [2024-12-13 22:59:43.067295] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:03.932 { 00:19:03.932 "results": [ 00:19:03.932 { 00:19:03.932 "job": "ftl0", 00:19:03.932 "core_mask": "0x1", 00:19:03.932 "workload": "randwrite", 00:19:03.932 "status": "finished", 00:19:03.932 "queue_depth": 1, 00:19:03.932 "io_size": 69632, 00:19:03.932 "runtime": 4.002063, 00:19:03.932 "iops": 758.608747538457, 00:19:03.932 "mibps": 50.37636214122566, 00:19:03.932 "io_failed": 0, 00:19:03.932 "io_timeout": 0, 00:19:03.932 "avg_latency_us": 1398.6672990777338, 00:19:03.932 "min_latency_us": 371.79076923076923, 00:19:03.932 "max_latency_us": 3579.273846153846 00:19:03.932 } 00:19:03.932 ], 00:19:03.932 "core_count": 1 00:19:03.932 } 00:19:04.192 22:59:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:04.192 [2024-12-13 22:59:43.179284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:04.192 Running I/O for 4 seconds... 
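The second pass switches to 4096-byte random writes at queue depth 128, so MiB/s is simply IOPS divided by 256. Cross-checking the totals reported below (an illustrative sketch, not part of the test output):

  awk 'BEGIN { printf "%.2f\n", 6237.72 / 256 }'   # -> 24.37 MiB/s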
00:19:06.080 8318.00 IOPS, 32.49 MiB/s [2024-12-13T22:59:46.609Z] 6562.00 IOPS, 25.63 MiB/s [2024-12-13T22:59:47.553Z] 6266.00 IOPS, 24.48 MiB/s [2024-12-13T22:59:47.553Z] 6263.00 IOPS, 24.46 MiB/s 00:19:08.413 Latency(us) 00:19:08.413 [2024-12-13T22:59:47.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.413 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:08.413 ftl0 : 4.04 6237.72 24.37 0.00 0.00 20422.07 263.09 46177.67 00:19:08.413 [2024-12-13T22:59:47.553Z] =================================================================================================================== 00:19:08.413 [2024-12-13T22:59:47.553Z] Total : 6237.72 24.37 0.00 0.00 20422.07 0.00 46177.67 00:19:08.413 [2024-12-13 22:59:47.226698] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:08.413 { 00:19:08.413 "results": [ 00:19:08.413 { 00:19:08.413 "job": "ftl0", 00:19:08.413 "core_mask": "0x1", 00:19:08.413 "workload": "randwrite", 00:19:08.413 "status": "finished", 00:19:08.413 "queue_depth": 128, 00:19:08.413 "io_size": 4096, 00:19:08.413 "runtime": 4.036731, 00:19:08.413 "iops": 6237.7205714227675, 00:19:08.413 "mibps": 24.366095982120186, 00:19:08.413 "io_failed": 0, 00:19:08.413 "io_timeout": 0, 00:19:08.413 "avg_latency_us": 20422.069610313436, 00:19:08.413 "min_latency_us": 263.08923076923077, 00:19:08.413 "max_latency_us": 46177.67384615385 00:19:08.413 } 00:19:08.413 ], 00:19:08.413 "core_count": 1 00:19:08.413 } 00:19:08.413 22:59:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:08.413 [2024-12-13 22:59:47.355160] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:08.413 Running I/O for 4 seconds... 
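The final pass runs the verify workload at the same 4 KiB I/O size and queue depth 128; bdevperf checks data integrity by reading back what it wrote over the LBA range shown in the results. With 128 I/Os kept in flight, Little's law gives a rough sanity check of the average latency against the IOPS reported below (an illustrative sketch, not part of the test output):

  awk 'BEGIN { printf "%.0f\n", 128 / 5113.46 * 1e6 }'   # -> ~25032 us, close to the ~24965 us average below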
00:19:10.302 5493.00 IOPS, 21.46 MiB/s [2024-12-13T22:59:50.447Z] 5186.50 IOPS, 20.26 MiB/s [2024-12-13T22:59:51.391Z] 5151.33 IOPS, 20.12 MiB/s [2024-12-13T22:59:51.391Z] 5096.50 IOPS, 19.91 MiB/s 00:19:12.251 Latency(us) 00:19:12.251 [2024-12-13T22:59:51.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.251 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:12.251 Verification LBA range: start 0x0 length 0x1400000 00:19:12.251 ftl0 : 4.01 5113.46 19.97 0.00 0.00 24965.45 278.84 42951.29 00:19:12.251 [2024-12-13T22:59:51.391Z] =================================================================================================================== 00:19:12.251 [2024-12-13T22:59:51.391Z] Total : 5113.46 19.97 0.00 0.00 24965.45 0.00 42951.29 00:19:12.251 { 00:19:12.251 "results": [ 00:19:12.251 { 00:19:12.251 "job": "ftl0", 00:19:12.251 "core_mask": "0x1", 00:19:12.251 "workload": "verify", 00:19:12.251 "status": "finished", 00:19:12.251 "verify_range": { 00:19:12.251 "start": 0, 00:19:12.251 "length": 20971520 00:19:12.251 }, 00:19:12.251 "queue_depth": 128, 00:19:12.251 "io_size": 4096, 00:19:12.251 "runtime": 4.011572, 00:19:12.251 "iops": 5113.456769565647, 00:19:12.251 "mibps": 19.97444050611581, 00:19:12.251 "io_failed": 0, 00:19:12.251 "io_timeout": 0, 00:19:12.251 "avg_latency_us": 24965.45292508691, 00:19:12.251 "min_latency_us": 278.8430769230769, 00:19:12.251 "max_latency_us": 42951.28615384615 00:19:12.251 } 00:19:12.251 ], 00:19:12.251 "core_count": 1 00:19:12.251 } 00:19:12.251 [2024-12-13 22:59:51.386850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:12.511 22:59:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:12.511 [2024-12-13 22:59:51.597848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.511 [2024-12-13 22:59:51.598086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.511 [2024-12-13 22:59:51.598111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:12.511 [2024-12-13 22:59:51.598123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.511 [2024-12-13 22:59:51.598154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.511 [2024-12-13 22:59:51.601208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.511 [2024-12-13 22:59:51.601374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.511 [2024-12-13 22:59:51.601399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.031 ms 00:19:12.511 [2024-12-13 22:59:51.601408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.511 [2024-12-13 22:59:51.604313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.511 [2024-12-13 22:59:51.604362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:12.511 [2024-12-13 22:59:51.604376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:19:12.511 [2024-12-13 22:59:51.604389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.770 [2024-12-13 22:59:51.834237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.770 [2024-12-13 22:59:51.834448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:12.770 [2024-12-13 22:59:51.834536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 229.820 ms 00:19:12.770 [2024-12-13 22:59:51.834564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.771 [2024-12-13 22:59:51.840802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.771 [2024-12-13 22:59:51.840969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.771 [2024-12-13 22:59:51.840996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.178 ms 00:19:12.771 [2024-12-13 22:59:51.841009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.771 [2024-12-13 22:59:51.867942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.771 [2024-12-13 22:59:51.868123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.771 [2024-12-13 22:59:51.868150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.842 ms 00:19:12.771 [2024-12-13 22:59:51.868159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.771 [2024-12-13 22:59:51.886029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.771 [2024-12-13 22:59:51.886081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.771 [2024-12-13 22:59:51.886096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.826 ms 00:19:12.771 [2024-12-13 22:59:51.886104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.771 [2024-12-13 22:59:51.886283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.771 [2024-12-13 22:59:51.886296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.771 [2024-12-13 22:59:51.886311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:12.771 [2024-12-13 22:59:51.886320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.033 [2024-12-13 22:59:51.912475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.033 [2024-12-13 22:59:51.912520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:13.033 [2024-12-13 22:59:51.912535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.136 ms 00:19:13.033 [2024-12-13 22:59:51.912543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.033 [2024-12-13 22:59:51.937649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.033 [2024-12-13 22:59:51.937693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:13.033 [2024-12-13 22:59:51.937708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.055 ms 00:19:13.033 [2024-12-13 22:59:51.937715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.033 [2024-12-13 22:59:51.963231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.033 [2024-12-13 22:59:51.963276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:13.033 [2024-12-13 22:59:51.963291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.454 ms 00:19:13.033 [2024-12-13 22:59:51.963298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.033 [2024-12-13 22:59:51.988212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.033 [2024-12-13 22:59:51.988256] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:13.033 [2024-12-13 22:59:51.988275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.805 ms 00:19:13.033 [2024-12-13 22:59:51.988282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.033 [2024-12-13 22:59:51.988329] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:13.033 [2024-12-13 22:59:51.988345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-12-13 22:59:51.988359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-12-13 22:59:51.988367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-12-13 22:59:51.988377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-12-13 22:59:51.988385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-12-13 22:59:51.988395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:13.034 [2024-12-13 22:59:51.988548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.988993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989289] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-12-13 22:59:51.989300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:13.035 [2024-12-13 22:59:51.989308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:13.035 [2024-12-13 22:59:51.989317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:13.035 [2024-12-13 22:59:51.989333] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:13.035 [2024-12-13 22:59:51.989342] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5fde6870-7cb9-424a-9436-1d922db96713 00:19:13.035 [2024-12-13 22:59:51.989353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:13.035 [2024-12-13 22:59:51.989362] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:13.035 [2024-12-13 22:59:51.989370] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:13.035 [2024-12-13 22:59:51.989380] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:13.035 [2024-12-13 22:59:51.989387] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:13.035 [2024-12-13 22:59:51.989397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:13.035 [2024-12-13 22:59:51.989404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:13.035 [2024-12-13 22:59:51.989414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:13.035 [2024-12-13 22:59:51.989420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:13.035 [2024-12-13 22:59:51.989430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.035 [2024-12-13 22:59:51.989438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:13.035 [2024-12-13 22:59:51.989449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:19:13.035 [2024-12-13 22:59:51.989457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.035 [2024-12-13 22:59:52.003189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.035 [2024-12-13 22:59:52.003233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:13.035 [2024-12-13 22:59:52.003247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.688 ms 00:19:13.035 [2024-12-13 22:59:52.003256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.035 [2024-12-13 22:59:52.003662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.035 [2024-12-13 22:59:52.003681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:13.035 [2024-12-13 22:59:52.003693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:19:13.035 [2024-12-13 22:59:52.003702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.035 [2024-12-13 22:59:52.042828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.035 [2024-12-13 22:59:52.043033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.035 [2024-12-13 22:59:52.043051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.035 [2024-12-13 22:59:52.043060] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:13.035 [2024-12-13 22:59:52.043137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.035 [2024-12-13 22:59:52.043148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.035 [2024-12-13 22:59:52.043158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.035 [2024-12-13 22:59:52.043166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.035 [2024-12-13 22:59:52.043255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.035 [2024-12-13 22:59:52.043265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.035 [2024-12-13 22:59:52.043276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.035 [2024-12-13 22:59:52.043284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.035 [2024-12-13 22:59:52.043302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.035 [2024-12-13 22:59:52.043311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.035 [2024-12-13 22:59:52.043321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.035 [2024-12-13 22:59:52.043329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.035 [2024-12-13 22:59:52.127526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.035 [2024-12-13 22:59:52.127581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.035 [2024-12-13 22:59:52.127601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.035 [2024-12-13 22:59:52.127610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-12-13 22:59:52.196125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-12-13 22:59:52.196181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.297 [2024-12-13 22:59:52.196196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-12-13 22:59:52.196205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-12-13 22:59:52.196319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-12-13 22:59:52.196331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:13.297 [2024-12-13 22:59:52.196342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-12-13 22:59:52.196351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-12-13 22:59:52.196398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-12-13 22:59:52.196409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:13.297 [2024-12-13 22:59:52.196419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-12-13 22:59:52.196428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-12-13 22:59:52.196526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-12-13 22:59:52.196540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:13.297 [2024-12-13 22:59:52.196554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:13.297 [2024-12-13 22:59:52.196562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-12-13 22:59:52.196597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-12-13 22:59:52.196606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:13.297 [2024-12-13 22:59:52.196616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-12-13 22:59:52.196624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-12-13 22:59:52.196665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-12-13 22:59:52.196678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:13.297 [2024-12-13 22:59:52.196689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-12-13 22:59:52.196705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-12-13 22:59:52.196753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-12-13 22:59:52.196799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:13.297 [2024-12-13 22:59:52.196810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-12-13 22:59:52.196819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-12-13 22:59:52.196971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 599.068 ms, result 0 00:19:13.297 true 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77705 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77705 ']' 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77705 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77705 00:19:13.297 killing process with pid 77705 00:19:13.297 Received shutdown signal, test time was about 4.000000 seconds 00:19:13.297 00:19:13.297 Latency(us) 00:19:13.297 [2024-12-13T22:59:52.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.297 [2024-12-13T22:59:52.437Z] =================================================================================================================== 00:19:13.297 [2024-12-13T22:59:52.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77705' 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77705 00:19:13.297 22:59:52 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77705 00:19:18.593 Remove shared memory files 00:19:18.593 22:59:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:18.593 22:59:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:18.593 22:59:57 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:18.593 22:59:57 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:18.593 22:59:57 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:18.593 22:59:57 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:18.593 22:59:57 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:18.593 22:59:57 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:18.593 ************************************ 00:19:18.593 END TEST ftl_bdevperf 00:19:18.593 ************************************ 00:19:18.593 00:19:18.593 real 0m27.448s 00:19:18.593 user 0m29.773s 00:19:18.593 sys 0m1.172s 00:19:18.593 22:59:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.593 22:59:57 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:18.593 22:59:57 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:18.593 22:59:57 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:18.593 22:59:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.855 22:59:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:18.855 ************************************ 00:19:18.855 START TEST ftl_trim 00:19:18.855 ************************************ 00:19:18.855 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:18.855 * Looking for test storage... 00:19:18.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.855 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:18.855 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:18.855 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:18.855 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.855 22:59:57 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:18.855 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.855 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:18.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.855 --rc genhtml_branch_coverage=1 00:19:18.855 --rc genhtml_function_coverage=1 00:19:18.855 --rc genhtml_legend=1 00:19:18.855 --rc geninfo_all_blocks=1 00:19:18.855 --rc geninfo_unexecuted_blocks=1 00:19:18.855 00:19:18.855 ' 00:19:18.855 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.856 --rc genhtml_branch_coverage=1 00:19:18.856 --rc genhtml_function_coverage=1 00:19:18.856 --rc genhtml_legend=1 00:19:18.856 --rc geninfo_all_blocks=1 00:19:18.856 --rc geninfo_unexecuted_blocks=1 00:19:18.856 00:19:18.856 ' 00:19:18.856 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.856 --rc genhtml_branch_coverage=1 00:19:18.856 --rc genhtml_function_coverage=1 00:19:18.856 --rc genhtml_legend=1 00:19:18.856 --rc geninfo_all_blocks=1 00:19:18.856 --rc geninfo_unexecuted_blocks=1 00:19:18.856 00:19:18.856 ' 00:19:18.856 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:18.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.856 --rc genhtml_branch_coverage=1 00:19:18.856 --rc genhtml_function_coverage=1 00:19:18.856 --rc genhtml_legend=1 00:19:18.856 --rc geninfo_all_blocks=1 00:19:18.856 --rc geninfo_unexecuted_blocks=1 00:19:18.856 00:19:18.856 ' 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.856 22:59:57 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78064 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78064 00:19:18.856 22:59:57 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:18.856 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78064 ']' 00:19:18.856 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.856 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.856 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.856 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.856 22:59:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:19.117 [2024-12-13 22:59:58.027889] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:19.117 [2024-12-13 22:59:58.028243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78064 ] 00:19:19.117 [2024-12-13 22:59:58.194316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:19.379 [2024-12-13 22:59:58.318907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.379 [2024-12-13 22:59:58.319352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.379 [2024-12-13 22:59:58.319370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.952 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.952 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:19.953 22:59:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:19.953 22:59:59 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:19.953 22:59:59 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:19.953 22:59:59 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:19.953 22:59:59 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:19.953 22:59:59 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:20.526 22:59:59 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:20.526 22:59:59 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:20.526 22:59:59 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:20.526 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:20.526 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:20.526 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:20.527 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:20.527 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:20.527 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:20.527 { 00:19:20.527 "name": "nvme0n1", 00:19:20.527 "aliases": [ 
00:19:20.527 "efdcce1d-9da6-4216-a95f-75f482723a11" 00:19:20.527 ], 00:19:20.527 "product_name": "NVMe disk", 00:19:20.527 "block_size": 4096, 00:19:20.527 "num_blocks": 1310720, 00:19:20.527 "uuid": "efdcce1d-9da6-4216-a95f-75f482723a11", 00:19:20.527 "numa_id": -1, 00:19:20.527 "assigned_rate_limits": { 00:19:20.527 "rw_ios_per_sec": 0, 00:19:20.527 "rw_mbytes_per_sec": 0, 00:19:20.527 "r_mbytes_per_sec": 0, 00:19:20.527 "w_mbytes_per_sec": 0 00:19:20.527 }, 00:19:20.527 "claimed": true, 00:19:20.527 "claim_type": "read_many_write_one", 00:19:20.527 "zoned": false, 00:19:20.527 "supported_io_types": { 00:19:20.527 "read": true, 00:19:20.527 "write": true, 00:19:20.527 "unmap": true, 00:19:20.527 "flush": true, 00:19:20.527 "reset": true, 00:19:20.527 "nvme_admin": true, 00:19:20.527 "nvme_io": true, 00:19:20.527 "nvme_io_md": false, 00:19:20.527 "write_zeroes": true, 00:19:20.527 "zcopy": false, 00:19:20.527 "get_zone_info": false, 00:19:20.527 "zone_management": false, 00:19:20.527 "zone_append": false, 00:19:20.527 "compare": true, 00:19:20.527 "compare_and_write": false, 00:19:20.527 "abort": true, 00:19:20.527 "seek_hole": false, 00:19:20.527 "seek_data": false, 00:19:20.527 "copy": true, 00:19:20.527 "nvme_iov_md": false 00:19:20.527 }, 00:19:20.527 "driver_specific": { 00:19:20.527 "nvme": [ 00:19:20.527 { 00:19:20.527 "pci_address": "0000:00:11.0", 00:19:20.527 "trid": { 00:19:20.527 "trtype": "PCIe", 00:19:20.527 "traddr": "0000:00:11.0" 00:19:20.527 }, 00:19:20.527 "ctrlr_data": { 00:19:20.527 "cntlid": 0, 00:19:20.527 "vendor_id": "0x1b36", 00:19:20.527 "model_number": "QEMU NVMe Ctrl", 00:19:20.527 "serial_number": "12341", 00:19:20.527 "firmware_revision": "8.0.0", 00:19:20.527 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:20.527 "oacs": { 00:19:20.527 "security": 0, 00:19:20.527 "format": 1, 00:19:20.527 "firmware": 0, 00:19:20.527 "ns_manage": 1 00:19:20.527 }, 00:19:20.527 "multi_ctrlr": false, 00:19:20.527 "ana_reporting": false 00:19:20.527 }, 00:19:20.527 "vs": { 00:19:20.527 "nvme_version": "1.4" 00:19:20.527 }, 00:19:20.527 "ns_data": { 00:19:20.527 "id": 1, 00:19:20.527 "can_share": false 00:19:20.527 } 00:19:20.527 } 00:19:20.527 ], 00:19:20.527 "mp_policy": "active_passive" 00:19:20.527 } 00:19:20.527 } 00:19:20.527 ]' 00:19:20.527 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:20.527 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:20.527 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:20.527 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:20.527 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:20.527 22:59:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:20.527 22:59:59 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:20.527 22:59:59 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:20.527 22:59:59 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:20.527 22:59:59 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:20.527 22:59:59 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:20.789 22:59:59 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=62c989e6-2dee-41a9-89cd-1b6512cd1464 00:19:20.789 22:59:59 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:20.789 22:59:59 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 62c989e6-2dee-41a9-89cd-1b6512cd1464 00:19:21.049 23:00:00 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:21.308 23:00:00 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=54bccaf5-534b-449d-ab48-d8ad67865a06 00:19:21.308 23:00:00 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 54bccaf5-534b-449d-ab48-d8ad67865a06 00:19:21.566 23:00:00 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ab71806f-f323-4d2b-9774-e4495182f129 00:19:21.566 23:00:00 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ab71806f-f323-4d2b-9774-e4495182f129 00:19:21.566 23:00:00 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:21.566 23:00:00 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:21.566 23:00:00 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ab71806f-f323-4d2b-9774-e4495182f129 00:19:21.566 23:00:00 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:21.566 23:00:00 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ab71806f-f323-4d2b-9774-e4495182f129 00:19:21.566 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ab71806f-f323-4d2b-9774-e4495182f129 00:19:21.566 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:21.566 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:21.566 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:21.566 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab71806f-f323-4d2b-9774-e4495182f129 00:19:21.825 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:21.825 { 00:19:21.825 "name": "ab71806f-f323-4d2b-9774-e4495182f129", 00:19:21.825 "aliases": [ 00:19:21.825 "lvs/nvme0n1p0" 00:19:21.825 ], 00:19:21.825 "product_name": "Logical Volume", 00:19:21.825 "block_size": 4096, 00:19:21.825 "num_blocks": 26476544, 00:19:21.825 "uuid": "ab71806f-f323-4d2b-9774-e4495182f129", 00:19:21.825 "assigned_rate_limits": { 00:19:21.825 "rw_ios_per_sec": 0, 00:19:21.825 "rw_mbytes_per_sec": 0, 00:19:21.825 "r_mbytes_per_sec": 0, 00:19:21.825 "w_mbytes_per_sec": 0 00:19:21.825 }, 00:19:21.825 "claimed": false, 00:19:21.825 "zoned": false, 00:19:21.825 "supported_io_types": { 00:19:21.825 "read": true, 00:19:21.825 "write": true, 00:19:21.825 "unmap": true, 00:19:21.825 "flush": false, 00:19:21.825 "reset": true, 00:19:21.825 "nvme_admin": false, 00:19:21.825 "nvme_io": false, 00:19:21.825 "nvme_io_md": false, 00:19:21.825 "write_zeroes": true, 00:19:21.825 "zcopy": false, 00:19:21.825 "get_zone_info": false, 00:19:21.825 "zone_management": false, 00:19:21.825 "zone_append": false, 00:19:21.825 "compare": false, 00:19:21.825 "compare_and_write": false, 00:19:21.825 "abort": false, 00:19:21.825 "seek_hole": true, 00:19:21.825 "seek_data": true, 00:19:21.825 "copy": false, 00:19:21.825 "nvme_iov_md": false 00:19:21.825 }, 00:19:21.825 "driver_specific": { 00:19:21.825 "lvol": { 00:19:21.825 "lvol_store_uuid": "54bccaf5-534b-449d-ab48-d8ad67865a06", 00:19:21.825 "base_bdev": "nvme0n1", 00:19:21.825 "thin_provision": true, 00:19:21.825 "num_allocated_clusters": 0, 00:19:21.825 "snapshot": false, 00:19:21.825 "clone": false, 00:19:21.825 "esnap_clone": false 00:19:21.825 } 00:19:21.825 } 00:19:21.825 } 00:19:21.825 ]' 00:19:21.825 23:00:00 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:21.825 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:21.825 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:21.825 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:21.825 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:21.825 23:00:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:21.825 23:00:00 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:21.825 23:00:00 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:21.825 23:00:00 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:22.083 23:00:01 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:22.083 23:00:01 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:22.083 23:00:01 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ab71806f-f323-4d2b-9774-e4495182f129 00:19:22.083 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ab71806f-f323-4d2b-9774-e4495182f129 00:19:22.083 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.083 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.083 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.083 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab71806f-f323-4d2b-9774-e4495182f129 00:19:22.341 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:22.341 { 00:19:22.341 "name": "ab71806f-f323-4d2b-9774-e4495182f129", 00:19:22.341 "aliases": [ 00:19:22.341 "lvs/nvme0n1p0" 00:19:22.341 ], 00:19:22.341 "product_name": "Logical Volume", 00:19:22.341 "block_size": 4096, 00:19:22.341 "num_blocks": 26476544, 00:19:22.341 "uuid": "ab71806f-f323-4d2b-9774-e4495182f129", 00:19:22.341 "assigned_rate_limits": { 00:19:22.341 "rw_ios_per_sec": 0, 00:19:22.341 "rw_mbytes_per_sec": 0, 00:19:22.341 "r_mbytes_per_sec": 0, 00:19:22.341 "w_mbytes_per_sec": 0 00:19:22.341 }, 00:19:22.341 "claimed": false, 00:19:22.341 "zoned": false, 00:19:22.341 "supported_io_types": { 00:19:22.341 "read": true, 00:19:22.341 "write": true, 00:19:22.341 "unmap": true, 00:19:22.341 "flush": false, 00:19:22.341 "reset": true, 00:19:22.341 "nvme_admin": false, 00:19:22.341 "nvme_io": false, 00:19:22.341 "nvme_io_md": false, 00:19:22.341 "write_zeroes": true, 00:19:22.341 "zcopy": false, 00:19:22.341 "get_zone_info": false, 00:19:22.341 "zone_management": false, 00:19:22.341 "zone_append": false, 00:19:22.341 "compare": false, 00:19:22.341 "compare_and_write": false, 00:19:22.341 "abort": false, 00:19:22.341 "seek_hole": true, 00:19:22.341 "seek_data": true, 00:19:22.341 "copy": false, 00:19:22.341 "nvme_iov_md": false 00:19:22.341 }, 00:19:22.341 "driver_specific": { 00:19:22.341 "lvol": { 00:19:22.341 "lvol_store_uuid": "54bccaf5-534b-449d-ab48-d8ad67865a06", 00:19:22.341 "base_bdev": "nvme0n1", 00:19:22.341 "thin_provision": true, 00:19:22.342 "num_allocated_clusters": 0, 00:19:22.342 "snapshot": false, 00:19:22.342 "clone": false, 00:19:22.342 "esnap_clone": false 00:19:22.342 } 00:19:22.342 } 00:19:22.342 } 00:19:22.342 ]' 00:19:22.342 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:22.342 23:00:01 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:22.342 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:22.342 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:22.342 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:22.342 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:22.342 23:00:01 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:22.342 23:00:01 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:22.600 23:00:01 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:22.600 23:00:01 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:22.600 23:00:01 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ab71806f-f323-4d2b-9774-e4495182f129 00:19:22.600 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ab71806f-f323-4d2b-9774-e4495182f129 00:19:22.600 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.600 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.600 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.600 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab71806f-f323-4d2b-9774-e4495182f129 00:19:22.600 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:22.600 { 00:19:22.600 "name": "ab71806f-f323-4d2b-9774-e4495182f129", 00:19:22.600 "aliases": [ 00:19:22.600 "lvs/nvme0n1p0" 00:19:22.600 ], 00:19:22.600 "product_name": "Logical Volume", 00:19:22.600 "block_size": 4096, 00:19:22.600 "num_blocks": 26476544, 00:19:22.600 "uuid": "ab71806f-f323-4d2b-9774-e4495182f129", 00:19:22.600 "assigned_rate_limits": { 00:19:22.600 "rw_ios_per_sec": 0, 00:19:22.600 "rw_mbytes_per_sec": 0, 00:19:22.600 "r_mbytes_per_sec": 0, 00:19:22.600 "w_mbytes_per_sec": 0 00:19:22.600 }, 00:19:22.600 "claimed": false, 00:19:22.600 "zoned": false, 00:19:22.600 "supported_io_types": { 00:19:22.600 "read": true, 00:19:22.600 "write": true, 00:19:22.600 "unmap": true, 00:19:22.600 "flush": false, 00:19:22.600 "reset": true, 00:19:22.600 "nvme_admin": false, 00:19:22.600 "nvme_io": false, 00:19:22.600 "nvme_io_md": false, 00:19:22.600 "write_zeroes": true, 00:19:22.600 "zcopy": false, 00:19:22.600 "get_zone_info": false, 00:19:22.600 "zone_management": false, 00:19:22.600 "zone_append": false, 00:19:22.600 "compare": false, 00:19:22.600 "compare_and_write": false, 00:19:22.600 "abort": false, 00:19:22.600 "seek_hole": true, 00:19:22.600 "seek_data": true, 00:19:22.600 "copy": false, 00:19:22.600 "nvme_iov_md": false 00:19:22.600 }, 00:19:22.600 "driver_specific": { 00:19:22.600 "lvol": { 00:19:22.600 "lvol_store_uuid": "54bccaf5-534b-449d-ab48-d8ad67865a06", 00:19:22.600 "base_bdev": "nvme0n1", 00:19:22.600 "thin_provision": true, 00:19:22.600 "num_allocated_clusters": 0, 00:19:22.600 "snapshot": false, 00:19:22.600 "clone": false, 00:19:22.600 "esnap_clone": false 00:19:22.600 } 00:19:22.600 } 00:19:22.600 } 00:19:22.600 ]' 00:19:22.600 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:22.858 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:22.858 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:22.858 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:22.858 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:22.858 23:00:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:22.858 23:00:01 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:22.858 23:00:01 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ab71806f-f323-4d2b-9774-e4495182f129 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:22.858 [2024-12-13 23:00:01.978718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.978771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.858 [2024-12-13 23:00:01.978785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.858 [2024-12-13 23:00:01.978792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.858 [2024-12-13 23:00:01.981046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.981074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.858 [2024-12-13 23:00:01.981083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:19:22.858 [2024-12-13 23:00:01.981089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.858 [2024-12-13 23:00:01.981163] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.858 [2024-12-13 23:00:01.981677] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.858 [2024-12-13 23:00:01.981697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.981704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.858 [2024-12-13 23:00:01.981712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:19:22.858 [2024-12-13 23:00:01.981718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.858 [2024-12-13 23:00:01.981903] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b9143edc-1a7a-4c95-b8e5-8cccde6ada68 00:19:22.858 [2024-12-13 23:00:01.982892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.982919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:22.858 [2024-12-13 23:00:01.982926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:22.858 [2024-12-13 23:00:01.982934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.858 [2024-12-13 23:00:01.988117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.988141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.858 [2024-12-13 23:00:01.988151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.118 ms 00:19:22.858 [2024-12-13 23:00:01.988158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.858 [2024-12-13 23:00:01.988259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.988269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.858 [2024-12-13 23:00:01.988275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:19:22.858 [2024-12-13 23:00:01.988285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.858 [2024-12-13 23:00:01.988310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.988317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.858 [2024-12-13 23:00:01.988323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:22.858 [2024-12-13 23:00:01.988332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.858 [2024-12-13 23:00:01.988358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:22.858 [2024-12-13 23:00:01.991263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.991288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.858 [2024-12-13 23:00:01.991297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.907 ms 00:19:22.858 [2024-12-13 23:00:01.991304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.858 [2024-12-13 23:00:01.991359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.991378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.858 [2024-12-13 23:00:01.991385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:22.858 [2024-12-13 23:00:01.991391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.858 [2024-12-13 23:00:01.991419] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:22.858 [2024-12-13 23:00:01.991524] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.858 [2024-12-13 23:00:01.991535] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.858 [2024-12-13 23:00:01.991544] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.858 [2024-12-13 23:00:01.991553] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.858 [2024-12-13 23:00:01.991560] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.858 [2024-12-13 23:00:01.991567] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:22.858 [2024-12-13 23:00:01.991572] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.858 [2024-12-13 23:00:01.991580] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.858 [2024-12-13 23:00:01.991587] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.858 [2024-12-13 23:00:01.991594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.858 [2024-12-13 23:00:01.991599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.858 [2024-12-13 23:00:01.991606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:19:22.858 [2024-12-13 23:00:01.991612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.859 [2024-12-13 23:00:01.991685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.859 
[2024-12-13 23:00:01.991691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.859 [2024-12-13 23:00:01.991698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:22.859 [2024-12-13 23:00:01.991704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.859 [2024-12-13 23:00:01.991842] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.859 [2024-12-13 23:00:01.991850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.859 [2024-12-13 23:00:01.991858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.859 [2024-12-13 23:00:01.991864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.859 [2024-12-13 23:00:01.991871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.859 [2024-12-13 23:00:01.991876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.859 [2024-12-13 23:00:01.991882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:22.859 [2024-12-13 23:00:01.991887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.859 [2024-12-13 23:00:01.991893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:22.859 [2024-12-13 23:00:01.991898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.859 [2024-12-13 23:00:01.991906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.859 [2024-12-13 23:00:01.991912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:22.859 [2024-12-13 23:00:01.991918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.859 [2024-12-13 23:00:01.991923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.859 [2024-12-13 23:00:01.991929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:22.859 [2024-12-13 23:00:01.991934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.859 [2024-12-13 23:00:01.991942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.859 [2024-12-13 23:00:01.991947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:22.859 [2024-12-13 23:00:01.991953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.859 [2024-12-13 23:00:01.991958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.859 [2024-12-13 23:00:01.991963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:22.859 [2024-12-13 23:00:01.991968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.859 [2024-12-13 23:00:01.991974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.859 [2024-12-13 23:00:01.991979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:22.859 [2024-12-13 23:00:01.991986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.859 [2024-12-13 23:00:01.991991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.859 [2024-12-13 23:00:01.991998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:22.859 [2024-12-13 23:00:01.992003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.859 [2024-12-13 23:00:01.992009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:22.859 [2024-12-13 23:00:01.992014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:22.859 [2024-12-13 23:00:01.992021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.859 [2024-12-13 23:00:01.992026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.859 [2024-12-13 23:00:01.992033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:22.859 [2024-12-13 23:00:01.992038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.859 [2024-12-13 23:00:01.992044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.859 [2024-12-13 23:00:01.992049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:22.859 [2024-12-13 23:00:01.992057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.859 [2024-12-13 23:00:01.992061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.859 [2024-12-13 23:00:01.992068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:22.859 [2024-12-13 23:00:01.992072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.859 [2024-12-13 23:00:01.992078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.859 [2024-12-13 23:00:01.992083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:22.859 [2024-12-13 23:00:01.992089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.859 [2024-12-13 23:00:01.992094] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.859 [2024-12-13 23:00:01.992101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.859 [2024-12-13 23:00:01.992106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.859 [2024-12-13 23:00:01.992112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.859 [2024-12-13 23:00:01.992119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.859 [2024-12-13 23:00:01.992126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.859 [2024-12-13 23:00:01.992131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.859 [2024-12-13 23:00:01.992137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.859 [2024-12-13 23:00:01.992142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.859 [2024-12-13 23:00:01.992148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.859 [2024-12-13 23:00:01.992154] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.859 [2024-12-13 23:00:01.992162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.859 [2024-12-13 23:00:01.992170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:22.859 [2024-12-13 23:00:01.992177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:22.859 [2024-12-13 23:00:01.992183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:22.859 [2024-12-13 23:00:01.992190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:22.859 [2024-12-13 23:00:01.992196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:22.859 [2024-12-13 23:00:01.992202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:22.859 [2024-12-13 23:00:01.992207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:22.859 [2024-12-13 23:00:01.992215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:22.859 [2024-12-13 23:00:01.992220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:22.859 [2024-12-13 23:00:01.992228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:22.859 [2024-12-13 23:00:01.992233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:22.859 [2024-12-13 23:00:01.992240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:22.859 [2024-12-13 23:00:01.992245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:22.859 [2024-12-13 23:00:01.992252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:22.859 [2024-12-13 23:00:01.992258] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.859 [2024-12-13 23:00:01.992267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.859 [2024-12-13 23:00:01.992274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.859 [2024-12-13 23:00:01.992280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.859 [2024-12-13 23:00:01.992286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.859 [2024-12-13 23:00:01.992293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.859 [2024-12-13 23:00:01.992299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.859 [2024-12-13 23:00:01.992306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.859 [2024-12-13 23:00:01.992311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:19:22.859 [2024-12-13 23:00:01.992318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.859 [2024-12-13 23:00:01.992404] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:22.859 [2024-12-13 23:00:01.992415] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:25.380 [2024-12-13 23:00:04.440112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.380 [2024-12-13 23:00:04.440171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:25.380 [2024-12-13 23:00:04.440186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2447.697 ms 00:19:25.380 [2024-12-13 23:00:04.440196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.380 [2024-12-13 23:00:04.466008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.380 [2024-12-13 23:00:04.466176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.380 [2024-12-13 23:00:04.466194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.570 ms 00:19:25.380 [2024-12-13 23:00:04.466204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.380 [2024-12-13 23:00:04.466340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.380 [2024-12-13 23:00:04.466353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:25.380 [2024-12-13 23:00:04.466377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:25.380 [2024-12-13 23:00:04.466388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.380 [2024-12-13 23:00:04.505847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.380 [2024-12-13 23:00:04.505888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.380 [2024-12-13 23:00:04.505901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.422 ms 00:19:25.380 [2024-12-13 23:00:04.505912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.380 [2024-12-13 23:00:04.505993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.380 [2024-12-13 23:00:04.506006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.380 [2024-12-13 23:00:04.506015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:25.380 [2024-12-13 23:00:04.506024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.380 [2024-12-13 23:00:04.506352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.380 [2024-12-13 23:00:04.506372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.380 [2024-12-13 23:00:04.506381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:19:25.380 [2024-12-13 23:00:04.506389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.380 [2024-12-13 23:00:04.506503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.380 [2024-12-13 23:00:04.506513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:25.380 [2024-12-13 23:00:04.506537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:25.380 [2024-12-13 23:00:04.506547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.638 [2024-12-13 23:00:04.521118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.521151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:25.638 [2024-12-13 23:00:04.521161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.535 ms 00:19:25.638 [2024-12-13 23:00:04.521170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.638 [2024-12-13 23:00:04.532441] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:25.638 [2024-12-13 23:00:04.547078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.547111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:25.638 [2024-12-13 23:00:04.547123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.804 ms 00:19:25.638 [2024-12-13 23:00:04.547131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.638 [2024-12-13 23:00:04.611765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.611820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:25.638 [2024-12-13 23:00:04.611834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.557 ms 00:19:25.638 [2024-12-13 23:00:04.611842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.638 [2024-12-13 23:00:04.612064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.612076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:25.638 [2024-12-13 23:00:04.612089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:19:25.638 [2024-12-13 23:00:04.612096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.638 [2024-12-13 23:00:04.635016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.635048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:25.638 [2024-12-13 23:00:04.635060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.887 ms 00:19:25.638 [2024-12-13 23:00:04.635068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.638 [2024-12-13 23:00:04.657260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.657289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:25.638 [2024-12-13 23:00:04.657301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.131 ms 00:19:25.638 [2024-12-13 23:00:04.657308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.638 [2024-12-13 23:00:04.657900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.657918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:25.638 [2024-12-13 23:00:04.657929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:19:25.638 [2024-12-13 23:00:04.657936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.638 [2024-12-13 23:00:04.724433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.724470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:25.638 [2024-12-13 23:00:04.724487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.466 ms 00:19:25.638 [2024-12-13 23:00:04.724495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:25.638 [2024-12-13 23:00:04.748646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.748791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:25.638 [2024-12-13 23:00:04.748811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.053 ms 00:19:25.638 [2024-12-13 23:00:04.748820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.638 [2024-12-13 23:00:04.771663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.638 [2024-12-13 23:00:04.771694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:25.638 [2024-12-13 23:00:04.771707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.784 ms 00:19:25.638 [2024-12-13 23:00:04.771715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-13 23:00:04.794883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.896 [2024-12-13 23:00:04.795024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:25.896 [2024-12-13 23:00:04.795044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.075 ms 00:19:25.896 [2024-12-13 23:00:04.795051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-13 23:00:04.795114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.896 [2024-12-13 23:00:04.795127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:25.896 [2024-12-13 23:00:04.795139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:25.896 [2024-12-13 23:00:04.795147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-13 23:00:04.795229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.896 [2024-12-13 23:00:04.795238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:25.896 [2024-12-13 23:00:04.795248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:25.896 [2024-12-13 23:00:04.795255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-13 23:00:04.796083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:25.896 [2024-12-13 23:00:04.799064] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2817.068 ms, result 0 00:19:25.896 [2024-12-13 23:00:04.799808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:25.897 { 00:19:25.897 "name": "ftl0", 00:19:25.897 "uuid": "b9143edc-1a7a-4c95-b8e5-8cccde6ada68" 00:19:25.897 } 00:19:25.897 23:00:04 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:25.897 23:00:04 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:25.897 23:00:04 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:25.897 23:00:04 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:25.897 23:00:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:25.897 23:00:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:25.897 23:00:04 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:25.897 23:00:05 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:26.153 [ 00:19:26.153 { 00:19:26.153 "name": "ftl0", 00:19:26.153 "aliases": [ 00:19:26.153 "b9143edc-1a7a-4c95-b8e5-8cccde6ada68" 00:19:26.153 ], 00:19:26.153 "product_name": "FTL disk", 00:19:26.153 "block_size": 4096, 00:19:26.153 "num_blocks": 23592960, 00:19:26.153 "uuid": "b9143edc-1a7a-4c95-b8e5-8cccde6ada68", 00:19:26.153 "assigned_rate_limits": { 00:19:26.153 "rw_ios_per_sec": 0, 00:19:26.153 "rw_mbytes_per_sec": 0, 00:19:26.153 "r_mbytes_per_sec": 0, 00:19:26.153 "w_mbytes_per_sec": 0 00:19:26.153 }, 00:19:26.153 "claimed": false, 00:19:26.153 "zoned": false, 00:19:26.153 "supported_io_types": { 00:19:26.153 "read": true, 00:19:26.153 "write": true, 00:19:26.153 "unmap": true, 00:19:26.153 "flush": true, 00:19:26.153 "reset": false, 00:19:26.153 "nvme_admin": false, 00:19:26.153 "nvme_io": false, 00:19:26.153 "nvme_io_md": false, 00:19:26.153 "write_zeroes": true, 00:19:26.153 "zcopy": false, 00:19:26.153 "get_zone_info": false, 00:19:26.153 "zone_management": false, 00:19:26.153 "zone_append": false, 00:19:26.153 "compare": false, 00:19:26.153 "compare_and_write": false, 00:19:26.153 "abort": false, 00:19:26.153 "seek_hole": false, 00:19:26.153 "seek_data": false, 00:19:26.153 "copy": false, 00:19:26.153 "nvme_iov_md": false 00:19:26.153 }, 00:19:26.153 "driver_specific": { 00:19:26.153 "ftl": { 00:19:26.153 "base_bdev": "ab71806f-f323-4d2b-9774-e4495182f129", 00:19:26.153 "cache": "nvc0n1p0" 00:19:26.153 } 00:19:26.153 } 00:19:26.153 } 00:19:26.153 ] 00:19:26.153 23:00:05 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:26.153 23:00:05 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:26.153 23:00:05 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:26.410 23:00:05 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:26.410 23:00:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:26.673 23:00:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:26.673 { 00:19:26.673 "name": "ftl0", 00:19:26.673 "aliases": [ 00:19:26.673 "b9143edc-1a7a-4c95-b8e5-8cccde6ada68" 00:19:26.673 ], 00:19:26.673 "product_name": "FTL disk", 00:19:26.673 "block_size": 4096, 00:19:26.673 "num_blocks": 23592960, 00:19:26.673 "uuid": "b9143edc-1a7a-4c95-b8e5-8cccde6ada68", 00:19:26.673 "assigned_rate_limits": { 00:19:26.673 "rw_ios_per_sec": 0, 00:19:26.673 "rw_mbytes_per_sec": 0, 00:19:26.673 "r_mbytes_per_sec": 0, 00:19:26.673 "w_mbytes_per_sec": 0 00:19:26.673 }, 00:19:26.673 "claimed": false, 00:19:26.673 "zoned": false, 00:19:26.673 "supported_io_types": { 00:19:26.673 "read": true, 00:19:26.673 "write": true, 00:19:26.673 "unmap": true, 00:19:26.673 "flush": true, 00:19:26.673 "reset": false, 00:19:26.673 "nvme_admin": false, 00:19:26.673 "nvme_io": false, 00:19:26.673 "nvme_io_md": false, 00:19:26.673 "write_zeroes": true, 00:19:26.673 "zcopy": false, 00:19:26.673 "get_zone_info": false, 00:19:26.673 "zone_management": false, 00:19:26.673 "zone_append": false, 00:19:26.673 "compare": false, 00:19:26.673 "compare_and_write": false, 00:19:26.673 "abort": false, 00:19:26.673 "seek_hole": false, 00:19:26.673 "seek_data": false, 00:19:26.673 "copy": false, 00:19:26.673 "nvme_iov_md": false 00:19:26.673 }, 00:19:26.673 "driver_specific": { 00:19:26.673 "ftl": { 00:19:26.673 "base_bdev": "ab71806f-f323-4d2b-9774-e4495182f129", 
00:19:26.673 "cache": "nvc0n1p0" 00:19:26.673 } 00:19:26.673 } 00:19:26.673 } 00:19:26.673 ]' 00:19:26.673 23:00:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:26.673 23:00:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:26.673 23:00:05 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:26.935 [2024-12-13 23:00:05.839203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.839245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:26.935 [2024-12-13 23:00:05.839259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:26.935 [2024-12-13 23:00:05.839271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.839305] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:26.935 [2024-12-13 23:00:05.841918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.842063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:26.935 [2024-12-13 23:00:05.842086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:19:26.935 [2024-12-13 23:00:05.842094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.842699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.842716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:26.935 [2024-12-13 23:00:05.842727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:19:26.935 [2024-12-13 23:00:05.842735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.846385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.846414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:26.935 [2024-12-13 23:00:05.846426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:19:26.935 [2024-12-13 23:00:05.846433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.853469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.853589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:26.935 [2024-12-13 23:00:05.853607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.989 ms 00:19:26.935 [2024-12-13 23:00:05.853615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.877167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.877197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:26.935 [2024-12-13 23:00:05.877211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.465 ms 00:19:26.935 [2024-12-13 23:00:05.877218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.891945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.892064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:26.935 [2024-12-13 23:00:05.892083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.665 ms 00:19:26.935 [2024-12-13 23:00:05.892093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.892303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.892313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:26.935 [2024-12-13 23:00:05.892323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:26.935 [2024-12-13 23:00:05.892330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.915380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.915495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:26.935 [2024-12-13 23:00:05.915513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.013 ms 00:19:26.935 [2024-12-13 23:00:05.915521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.938086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.938197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:26.935 [2024-12-13 23:00:05.938217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.511 ms 00:19:26.935 [2024-12-13 23:00:05.938224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.960056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.960086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:26.935 [2024-12-13 23:00:05.960097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.761 ms 00:19:26.935 [2024-12-13 23:00:05.960105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.981904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.935 [2024-12-13 23:00:05.981934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:26.935 [2024-12-13 23:00:05.981946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.689 ms 00:19:26.935 [2024-12-13 23:00:05.981953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.935 [2024-12-13 23:00:05.982014] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:26.935 [2024-12-13 23:00:05.982028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982092] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:26.935 [2024-12-13 23:00:05.982172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 
[2024-12-13 23:00:05.982310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:26.936 [2024-12-13 23:00:05.982517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:26.936 [2024-12-13 23:00:05.982903] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:26.936 [2024-12-13 23:00:05.982913] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9143edc-1a7a-4c95-b8e5-8cccde6ada68 00:19:26.937 [2024-12-13 23:00:05.982921] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:26.937 [2024-12-13 23:00:05.982929] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:26.937 [2024-12-13 23:00:05.982939] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:26.937 [2024-12-13 23:00:05.982950] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:26.937 [2024-12-13 23:00:05.982957] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:26.937 [2024-12-13 23:00:05.982967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:26.937 [2024-12-13 23:00:05.982973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:26.937 [2024-12-13 23:00:05.982981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:26.937 [2024-12-13 23:00:05.982987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:26.937 [2024-12-13 23:00:05.982996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.937 [2024-12-13 23:00:05.983003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:26.937 [2024-12-13 23:00:05.983013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:19:26.937 [2024-12-13 23:00:05.983020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.937 [2024-12-13 23:00:05.995479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.937 [2024-12-13 23:00:05.995512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:26.937 [2024-12-13 23:00:05.995526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.420 ms 00:19:26.937 [2024-12-13 23:00:05.995533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.937 [2024-12-13 23:00:05.995933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.937 [2024-12-13 23:00:05.995950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:26.937 [2024-12-13 23:00:05.995961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:19:26.937 [2024-12-13 23:00:05.995968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.937 [2024-12-13 23:00:06.039885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.937 [2024-12-13 23:00:06.039921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:26.937 [2024-12-13 23:00:06.039932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.937 [2024-12-13 23:00:06.039940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.937 [2024-12-13 23:00:06.040052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.937 [2024-12-13 23:00:06.040062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.937 [2024-12-13 23:00:06.040072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.937 [2024-12-13 23:00:06.040079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.937 [2024-12-13 23:00:06.040149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.937 [2024-12-13 23:00:06.040159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.937 [2024-12-13 23:00:06.040172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.937 [2024-12-13 23:00:06.040179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.937 [2024-12-13 23:00:06.040208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.937 [2024-12-13 23:00:06.040216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.937 [2024-12-13 23:00:06.040225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.937 [2024-12-13 23:00:06.040233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.195 [2024-12-13 23:00:06.121813] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.195 [2024-12-13 23:00:06.121851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.195 [2024-12-13 23:00:06.121863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.195 [2024-12-13 23:00:06.121870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.195 [2024-12-13 23:00:06.185581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.195 [2024-12-13 23:00:06.185719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.195 [2024-12-13 23:00:06.185737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.195 [2024-12-13 23:00:06.185746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.195 [2024-12-13 23:00:06.185879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.195 [2024-12-13 23:00:06.185889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.195 [2024-12-13 23:00:06.185901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.195 [2024-12-13 23:00:06.185910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.195 [2024-12-13 23:00:06.185976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.195 [2024-12-13 23:00:06.185984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.195 [2024-12-13 23:00:06.185993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.195 [2024-12-13 23:00:06.186000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.195 [2024-12-13 23:00:06.186114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.195 [2024-12-13 23:00:06.186124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.195 [2024-12-13 23:00:06.186133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.195 [2024-12-13 23:00:06.186142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.195 [2024-12-13 23:00:06.186194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.195 [2024-12-13 23:00:06.186203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:27.195 [2024-12-13 23:00:06.186212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.195 [2024-12-13 23:00:06.186219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.195 [2024-12-13 23:00:06.186274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.195 [2024-12-13 23:00:06.186283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.195 [2024-12-13 23:00:06.186294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.195 [2024-12-13 23:00:06.186302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.195 [2024-12-13 23:00:06.186362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.195 [2024-12-13 23:00:06.186372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.195 [2024-12-13 23:00:06.186381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.195 [2024-12-13 23:00:06.186388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:27.195 [2024-12-13 23:00:06.186581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.351 ms, result 0 00:19:27.195 true 00:19:27.195 23:00:06 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78064 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78064 ']' 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78064 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78064 00:19:27.196 killing process with pid 78064 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78064' 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78064 00:19:27.196 23:00:06 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78064 00:19:33.755 23:00:12 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:34.697 65536+0 records in 00:19:34.697 65536+0 records out 00:19:34.697 268435456 bytes (268 MB, 256 MiB) copied, 1.09149 s, 246 MB/s 00:19:34.697 23:00:13 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:34.697 [2024-12-13 23:00:13.678676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:34.697 [2024-12-13 23:00:13.678842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78240 ] 00:19:34.983 [2024-12-13 23:00:13.842579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.983 [2024-12-13 23:00:13.959855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.266 [2024-12-13 23:00:14.258206] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.266 [2024-12-13 23:00:14.258481] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.528 [2024-12-13 23:00:14.421187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.528 [2024-12-13 23:00:14.421251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.528 [2024-12-13 23:00:14.421266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:35.528 [2024-12-13 23:00:14.421275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.528 [2024-12-13 23:00:14.424382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.528 [2024-12-13 23:00:14.424580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.528 [2024-12-13 23:00:14.424601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:19:35.528 [2024-12-13 23:00:14.424610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.528 [2024-12-13 23:00:14.424858] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.528 [2024-12-13 23:00:14.425749] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.528 [2024-12-13 23:00:14.425819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.528 [2024-12-13 23:00:14.425827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.528 [2024-12-13 23:00:14.425837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:19:35.528 [2024-12-13 23:00:14.425845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.528 [2024-12-13 23:00:14.427532] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:35.528 [2024-12-13 23:00:14.441797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.529 [2024-12-13 23:00:14.441977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:35.529 [2024-12-13 23:00:14.442000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.267 ms 00:19:35.529 [2024-12-13 23:00:14.442010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.529 [2024-12-13 23:00:14.442413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.529 [2024-12-13 23:00:14.442449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:35.529 [2024-12-13 23:00:14.442461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:35.529 [2024-12-13 23:00:14.442471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.529 [2024-12-13 23:00:14.450673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:35.529 [2024-12-13 23:00:14.450723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.529 [2024-12-13 23:00:14.450735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.152 ms 00:19:35.529 [2024-12-13 23:00:14.450743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.529 [2024-12-13 23:00:14.450872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.529 [2024-12-13 23:00:14.450885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.529 [2024-12-13 23:00:14.450894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:35.529 [2024-12-13 23:00:14.450903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.529 [2024-12-13 23:00:14.450934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.529 [2024-12-13 23:00:14.450943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.529 [2024-12-13 23:00:14.450952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:35.529 [2024-12-13 23:00:14.450960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.529 [2024-12-13 23:00:14.450983] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:35.529 [2024-12-13 23:00:14.455082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.529 [2024-12-13 23:00:14.455121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.529 [2024-12-13 23:00:14.455133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.107 ms 00:19:35.529 [2024-12-13 23:00:14.455140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.529 [2024-12-13 23:00:14.455216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.529 [2024-12-13 23:00:14.455227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.529 [2024-12-13 23:00:14.455237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:35.529 [2024-12-13 23:00:14.455245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.529 [2024-12-13 23:00:14.455272] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:35.529 [2024-12-13 23:00:14.455297] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:35.529 [2024-12-13 23:00:14.455333] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:35.529 [2024-12-13 23:00:14.455350] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:35.529 [2024-12-13 23:00:14.455466] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.529 [2024-12-13 23:00:14.455476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.529 [2024-12-13 23:00:14.455487] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.529 [2024-12-13 23:00:14.455500] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.529 [2024-12-13 23:00:14.455511] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.529 [2024-12-13 23:00:14.455519] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:35.529 [2024-12-13 23:00:14.455527] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.529 [2024-12-13 23:00:14.455534] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.529 [2024-12-13 23:00:14.455541] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.529 [2024-12-13 23:00:14.455551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.529 [2024-12-13 23:00:14.455558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.529 [2024-12-13 23:00:14.455566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:19:35.529 [2024-12-13 23:00:14.455573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.529 [2024-12-13 23:00:14.455661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.529 [2024-12-13 23:00:14.455673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.529 [2024-12-13 23:00:14.455681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:35.529 [2024-12-13 23:00:14.455688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.529 [2024-12-13 23:00:14.455823] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.529 [2024-12-13 23:00:14.455835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.529 [2024-12-13 23:00:14.455844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.529 [2024-12-13 23:00:14.455852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.529 [2024-12-13 23:00:14.455860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.529 [2024-12-13 23:00:14.455867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.529 [2024-12-13 23:00:14.455875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:35.529 [2024-12-13 23:00:14.455884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.529 [2024-12-13 23:00:14.455891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:35.529 [2024-12-13 23:00:14.455898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.529 [2024-12-13 23:00:14.455905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.529 [2024-12-13 23:00:14.455919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:35.529 [2024-12-13 23:00:14.455926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.529 [2024-12-13 23:00:14.455979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.529 [2024-12-13 23:00:14.455987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:35.529 [2024-12-13 23:00:14.455994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.529 [2024-12-13 23:00:14.456011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:35.529 [2024-12-13 23:00:14.456018] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.529 [2024-12-13 23:00:14.456034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.529 [2024-12-13 23:00:14.456048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.529 [2024-12-13 23:00:14.456055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.529 [2024-12-13 23:00:14.456070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.529 [2024-12-13 23:00:14.456077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.529 [2024-12-13 23:00:14.456090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:35.529 [2024-12-13 23:00:14.456098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.529 [2024-12-13 23:00:14.456112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.529 [2024-12-13 23:00:14.456119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.529 [2024-12-13 23:00:14.456132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.529 [2024-12-13 23:00:14.456139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:35.529 [2024-12-13 23:00:14.456147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.529 [2024-12-13 23:00:14.456154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.529 [2024-12-13 23:00:14.456161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:35.529 [2024-12-13 23:00:14.456168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.529 [2024-12-13 23:00:14.456181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:35.529 [2024-12-13 23:00:14.456188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456194] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.529 [2024-12-13 23:00:14.456203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.529 [2024-12-13 23:00:14.456213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.529 [2024-12-13 23:00:14.456220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.529 [2024-12-13 23:00:14.456228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.529 [2024-12-13 23:00:14.456237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.529 [2024-12-13 23:00:14.456244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.529 
[2024-12-13 23:00:14.456251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.529 [2024-12-13 23:00:14.456257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.529 [2024-12-13 23:00:14.456264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.529 [2024-12-13 23:00:14.456273] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.529 [2024-12-13 23:00:14.456282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.529 [2024-12-13 23:00:14.456292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:35.529 [2024-12-13 23:00:14.456300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:35.530 [2024-12-13 23:00:14.456306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:35.530 [2024-12-13 23:00:14.456314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:35.530 [2024-12-13 23:00:14.456322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:35.530 [2024-12-13 23:00:14.456329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:35.530 [2024-12-13 23:00:14.456337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:35.530 [2024-12-13 23:00:14.456344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:35.530 [2024-12-13 23:00:14.456351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:35.530 [2024-12-13 23:00:14.456358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:35.530 [2024-12-13 23:00:14.456365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:35.530 [2024-12-13 23:00:14.456372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:35.530 [2024-12-13 23:00:14.456379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:35.530 [2024-12-13 23:00:14.456387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:35.530 [2024-12-13 23:00:14.456394] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.530 [2024-12-13 23:00:14.456402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.530 [2024-12-13 23:00:14.456410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.530 [2024-12-13 23:00:14.456418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.530 [2024-12-13 23:00:14.456425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.530 [2024-12-13 23:00:14.456433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.530 [2024-12-13 23:00:14.456441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.456451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.530 [2024-12-13 23:00:14.456459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:19:35.530 [2024-12-13 23:00:14.456466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.488310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.488359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.530 [2024-12-13 23:00:14.488371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.787 ms 00:19:35.530 [2024-12-13 23:00:14.488379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.488516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.488528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.530 [2024-12-13 23:00:14.488544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:35.530 [2024-12-13 23:00:14.488552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.536670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.536724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.530 [2024-12-13 23:00:14.536741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.095 ms 00:19:35.530 [2024-12-13 23:00:14.536749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.536884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.536897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.530 [2024-12-13 23:00:14.536907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.530 [2024-12-13 23:00:14.536916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.537418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.537449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.530 [2024-12-13 23:00:14.537460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:19:35.530 [2024-12-13 23:00:14.537476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.537632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.537642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.530 [2024-12-13 23:00:14.537651] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:35.530 [2024-12-13 23:00:14.537659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.553833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.553876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.530 [2024-12-13 23:00:14.553888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.151 ms 00:19:35.530 [2024-12-13 23:00:14.553896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.568131] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:35.530 [2024-12-13 23:00:14.568181] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.530 [2024-12-13 23:00:14.568195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.568203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.530 [2024-12-13 23:00:14.568213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.179 ms 00:19:35.530 [2024-12-13 23:00:14.568222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.594457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.594509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.530 [2024-12-13 23:00:14.594523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.138 ms 00:19:35.530 [2024-12-13 23:00:14.594531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.607711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.607753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.530 [2024-12-13 23:00:14.607780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.084 ms 00:19:35.530 [2024-12-13 23:00:14.607797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.620581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.620625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.530 [2024-12-13 23:00:14.620636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.696 ms 00:19:35.530 [2024-12-13 23:00:14.620644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.530 [2024-12-13 23:00:14.621316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.530 [2024-12-13 23:00:14.621342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.530 [2024-12-13 23:00:14.621353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:19:35.530 [2024-12-13 23:00:14.621361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.792 [2024-12-13 23:00:14.687833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.792 [2024-12-13 23:00:14.687898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.792 [2024-12-13 23:00:14.687913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.446 ms 00:19:35.792 [2024-12-13 23:00:14.687923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.792 [2024-12-13 23:00:14.699176] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:35.792 [2024-12-13 23:00:14.718576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.792 [2024-12-13 23:00:14.718630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.792 [2024-12-13 23:00:14.718644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.539 ms 00:19:35.792 [2024-12-13 23:00:14.718653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.792 [2024-12-13 23:00:14.718801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.792 [2024-12-13 23:00:14.718815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.792 [2024-12-13 23:00:14.718826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:35.792 [2024-12-13 23:00:14.718834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.792 [2024-12-13 23:00:14.718892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.792 [2024-12-13 23:00:14.718901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.792 [2024-12-13 23:00:14.718910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:35.792 [2024-12-13 23:00:14.718918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.792 [2024-12-13 23:00:14.718952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.792 [2024-12-13 23:00:14.718964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.792 [2024-12-13 23:00:14.718972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:35.792 [2024-12-13 23:00:14.718981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.792 [2024-12-13 23:00:14.719021] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.792 [2024-12-13 23:00:14.719032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.792 [2024-12-13 23:00:14.719040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.792 [2024-12-13 23:00:14.719048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:35.792 [2024-12-13 23:00:14.719057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.792 [2024-12-13 23:00:14.745344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.792 [2024-12-13 23:00:14.745555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.792 [2024-12-13 23:00:14.745577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.264 ms 00:19:35.792 [2024-12-13 23:00:14.745587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.792 [2024-12-13 23:00:14.745720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.792 [2024-12-13 23:00:14.745732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.792 [2024-12-13 23:00:14.745742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:35.792 [2024-12-13 23:00:14.745752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:35.792 [2024-12-13 23:00:14.746841] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.792 [2024-12-13 23:00:14.750359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.298 ms, result 0 00:19:35.792 [2024-12-13 23:00:14.751611] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.792 [2024-12-13 23:00:14.765118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.731  [2024-12-13T23:00:16.810Z] Copying: 16/256 [MB] (16 MBps) [2024-12-13T23:00:18.193Z] Copying: 33/256 [MB] (16 MBps) [2024-12-13T23:00:19.133Z] Copying: 52/256 [MB] (19 MBps) [2024-12-13T23:00:20.074Z] Copying: 78/256 [MB] (25 MBps) [2024-12-13T23:00:21.016Z] Copying: 101/256 [MB] (22 MBps) [2024-12-13T23:00:21.957Z] Copying: 126/256 [MB] (25 MBps) [2024-12-13T23:00:22.924Z] Copying: 138/256 [MB] (12 MBps) [2024-12-13T23:00:23.876Z] Copying: 150/256 [MB] (11 MBps) [2024-12-13T23:00:24.819Z] Copying: 164/256 [MB] (14 MBps) [2024-12-13T23:00:26.209Z] Copying: 177/256 [MB] (13 MBps) [2024-12-13T23:00:26.784Z] Copying: 192/256 [MB] (14 MBps) [2024-12-13T23:00:28.171Z] Copying: 204/256 [MB] (12 MBps) [2024-12-13T23:00:29.125Z] Copying: 219160/262144 [kB] (9992 kBps) [2024-12-13T23:00:30.077Z] Copying: 229336/262144 [kB] (10176 kBps) [2024-12-13T23:00:31.022Z] Copying: 239340/262144 [kB] (10004 kBps) [2024-12-13T23:00:31.965Z] Copying: 249256/262144 [kB] (9916 kBps) [2024-12-13T23:00:32.228Z] Copying: 253/256 [MB] (10 MBps) [2024-12-13T23:00:32.228Z] Copying: 256/256 [MB] (average 14 MBps)[2024-12-13 23:00:32.031607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:53.088 [2024-12-13 23:00:32.041964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.042018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.088 [2024-12-13 23:00:32.042035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:53.088 [2024-12-13 23:00:32.042043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.042077] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:53.088 [2024-12-13 23:00:32.045108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.045152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.088 [2024-12-13 23:00:32.045164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:19:53.088 [2024-12-13 23:00:32.045172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.048198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.048249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.088 [2024-12-13 23:00:32.048261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.994 ms 00:19:53.088 [2024-12-13 23:00:32.048270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.056872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.056930] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.088 [2024-12-13 23:00:32.056941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.583 ms 00:19:53.088 [2024-12-13 23:00:32.056949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.063835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.063878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:53.088 [2024-12-13 23:00:32.063890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:19:53.088 [2024-12-13 23:00:32.063897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.089676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.089726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.088 [2024-12-13 23:00:32.089738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.725 ms 00:19:53.088 [2024-12-13 23:00:32.089746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.106602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.106659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.088 [2024-12-13 23:00:32.106676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.774 ms 00:19:53.088 [2024-12-13 23:00:32.106684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.106864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.106878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.088 [2024-12-13 23:00:32.106889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:53.088 [2024-12-13 23:00:32.106906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.133154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.133215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:53.088 [2024-12-13 23:00:32.133227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.230 ms 00:19:53.088 [2024-12-13 23:00:32.133234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.158871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.158918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:53.088 [2024-12-13 23:00:32.158930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:19:53.088 [2024-12-13 23:00:32.158936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.185404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.088 [2024-12-13 23:00:32.185469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.088 [2024-12-13 23:00:32.185486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.166 ms 00:19:53.088 [2024-12-13 23:00:32.185495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.211125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:19:53.088 [2024-12-13 23:00:32.211174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.088 [2024-12-13 23:00:32.211188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.526 ms 00:19:53.088 [2024-12-13 23:00:32.211196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.088 [2024-12-13 23:00:32.211248] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.088 [2024-12-13 23:00:32.211266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211631] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 
23:00:32.211866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.211996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.212004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.212012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.089 [2024-12-13 23:00:32.212020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.090 [2024-12-13 23:00:32.212028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.090 [2024-12-13 23:00:32.212036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.090 [2024-12-13 23:00:32.212043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.090 [2024-12-13 23:00:32.212061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.090 [2024-12-13 23:00:32.212069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:19:53.090 [2024-12-13 23:00:32.212076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.090 [2024-12-13 23:00:32.212084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.090 [2024-12-13 23:00:32.212092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.090 [2024-12-13 23:00:32.212099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.090 [2024-12-13 23:00:32.212116] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.090 [2024-12-13 23:00:32.212126] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9143edc-1a7a-4c95-b8e5-8cccde6ada68 00:19:53.090 [2024-12-13 23:00:32.212134] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.090 [2024-12-13 23:00:32.212142] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:53.090 [2024-12-13 23:00:32.212150] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.090 [2024-12-13 23:00:32.212158] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.090 [2024-12-13 23:00:32.212167] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.090 [2024-12-13 23:00:32.212176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:53.090 [2024-12-13 23:00:32.212183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.090 [2024-12-13 23:00:32.212190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.090 [2024-12-13 23:00:32.212196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:53.090 [2024-12-13 23:00:32.212203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.090 [2024-12-13 23:00:32.212216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.090 [2024-12-13 23:00:32.212226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:19:53.090 [2024-12-13 23:00:32.212233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.226246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.352 [2024-12-13 23:00:32.226290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.352 [2024-12-13 23:00:32.226302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.979 ms 00:19:53.352 [2024-12-13 23:00:32.226310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.226729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.352 [2024-12-13 23:00:32.226747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.352 [2024-12-13 23:00:32.226775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:19:53.352 [2024-12-13 23:00:32.226784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.266217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.266269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.352 [2024-12-13 23:00:32.266281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 
[2024-12-13 23:00:32.266289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.266411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.266422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.352 [2024-12-13 23:00:32.266430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.266438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.266497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.266507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.352 [2024-12-13 23:00:32.266516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.266524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.266544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.266557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.352 [2024-12-13 23:00:32.266565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.266573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.352812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.352871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.352 [2024-12-13 23:00:32.352885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.352894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.423502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.423566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.352 [2024-12-13 23:00:32.423580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.423589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.423674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.423684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.352 [2024-12-13 23:00:32.423694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.423703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.423737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.423746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.352 [2024-12-13 23:00:32.423785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.423807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.423907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.423918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.352 [2024-12-13 23:00:32.423927] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.423935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.423971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.423981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.352 [2024-12-13 23:00:32.423990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.424001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.424050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.424059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.352 [2024-12-13 23:00:32.424069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.424076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.424128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.352 [2024-12-13 23:00:32.424138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.352 [2024-12-13 23:00:32.424150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.352 [2024-12-13 23:00:32.424159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.352 [2024-12-13 23:00:32.424319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.358 ms, result 0 00:19:54.742 00:19:54.742 00:19:54.742 23:00:33 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:54.742 23:00:33 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78453 00:19:54.742 23:00:33 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78453 00:19:54.742 23:00:33 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78453 ']' 00:19:54.742 23:00:33 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.742 23:00:33 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.742 23:00:33 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.742 23:00:33 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.742 23:00:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:54.742 [2024-12-13 23:00:33.634172] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:54.742 [2024-12-13 23:00:33.634323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78453 ] 00:19:54.742 [2024-12-13 23:00:33.798344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.003 [2024-12-13 23:00:33.921344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.575 23:00:34 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.575 23:00:34 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:55.575 23:00:34 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:55.836 [2024-12-13 23:00:34.840817] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.836 [2024-12-13 23:00:34.840904] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:56.096 [2024-12-13 23:00:35.019816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.019882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:56.096 [2024-12-13 23:00:35.019900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:56.096 [2024-12-13 23:00:35.019909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.022925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.022977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.096 [2024-12-13 23:00:35.022989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:19:56.096 [2024-12-13 23:00:35.022998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.023123] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:56.096 [2024-12-13 23:00:35.023875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:56.096 [2024-12-13 23:00:35.023905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.023914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.096 [2024-12-13 23:00:35.023926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:19:56.096 [2024-12-13 23:00:35.023933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.025733] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:56.096 [2024-12-13 23:00:35.037767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.037824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:56.096 [2024-12-13 23:00:35.037836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.040 ms 00:19:56.096 [2024-12-13 23:00:35.037844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.037958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.037970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:56.096 [2024-12-13 23:00:35.037978] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:56.096 [2024-12-13 23:00:35.037988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.045476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.045522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.096 [2024-12-13 23:00:35.045533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.442 ms 00:19:56.096 [2024-12-13 23:00:35.045541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.045643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.045653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.096 [2024-12-13 23:00:35.045660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:56.096 [2024-12-13 23:00:35.045671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.045694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.045703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:56.096 [2024-12-13 23:00:35.045709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:56.096 [2024-12-13 23:00:35.045716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.045734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:56.096 [2024-12-13 23:00:35.049061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.049099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.096 [2024-12-13 23:00:35.049110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:19:56.096 [2024-12-13 23:00:35.049117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.049172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.096 [2024-12-13 23:00:35.049180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:56.096 [2024-12-13 23:00:35.049190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:56.096 [2024-12-13 23:00:35.049199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.096 [2024-12-13 23:00:35.049217] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:56.096 [2024-12-13 23:00:35.049235] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:56.096 [2024-12-13 23:00:35.049274] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:56.096 [2024-12-13 23:00:35.049288] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:56.096 [2024-12-13 23:00:35.049373] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:56.096 [2024-12-13 23:00:35.049382] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:56.096 [2024-12-13 23:00:35.049394] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:56.096 [2024-12-13 23:00:35.049402] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:56.096 [2024-12-13 23:00:35.049411] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:56.096 [2024-12-13 23:00:35.049417] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:56.096 [2024-12-13 23:00:35.049425] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:56.097 [2024-12-13 23:00:35.049431] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:56.097 [2024-12-13 23:00:35.049440] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:56.097 [2024-12-13 23:00:35.049447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.097 [2024-12-13 23:00:35.049454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:56.097 [2024-12-13 23:00:35.049460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:19:56.097 [2024-12-13 23:00:35.049467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.097 [2024-12-13 23:00:35.049536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.097 [2024-12-13 23:00:35.049545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:56.097 [2024-12-13 23:00:35.049551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:56.097 [2024-12-13 23:00:35.049558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.097 [2024-12-13 23:00:35.049652] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:56.097 [2024-12-13 23:00:35.049662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:56.097 [2024-12-13 23:00:35.049669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.097 [2024-12-13 23:00:35.049676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:56.097 [2024-12-13 23:00:35.049691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:56.097 [2024-12-13 23:00:35.049705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:56.097 [2024-12-13 23:00:35.049710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.097 [2024-12-13 23:00:35.049723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:56.097 [2024-12-13 23:00:35.049731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:56.097 [2024-12-13 23:00:35.049736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.097 [2024-12-13 23:00:35.049742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:56.097 [2024-12-13 23:00:35.049747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:56.097 [2024-12-13 23:00:35.049768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.097 
[2024-12-13 23:00:35.049774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:56.097 [2024-12-13 23:00:35.049783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:56.097 [2024-12-13 23:00:35.049795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:56.097 [2024-12-13 23:00:35.049808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.097 [2024-12-13 23:00:35.049820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:56.097 [2024-12-13 23:00:35.049829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.097 [2024-12-13 23:00:35.049841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:56.097 [2024-12-13 23:00:35.049847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.097 [2024-12-13 23:00:35.049860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:56.097 [2024-12-13 23:00:35.049868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.097 [2024-12-13 23:00:35.049880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:56.097 [2024-12-13 23:00:35.049885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.097 [2024-12-13 23:00:35.049897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:56.097 [2024-12-13 23:00:35.049921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:56.097 [2024-12-13 23:00:35.049926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.097 [2024-12-13 23:00:35.049932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:56.097 [2024-12-13 23:00:35.049938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:56.097 [2024-12-13 23:00:35.049946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:56.097 [2024-12-13 23:00:35.049958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:56.097 [2024-12-13 23:00:35.049963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049969] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:56.097 [2024-12-13 23:00:35.049977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:56.097 [2024-12-13 23:00:35.049984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.097 [2024-12-13 23:00:35.049990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.097 [2024-12-13 23:00:35.049998] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:56.097 [2024-12-13 23:00:35.050003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:56.097 [2024-12-13 23:00:35.050012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:56.097 [2024-12-13 23:00:35.050017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:56.097 [2024-12-13 23:00:35.050024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:56.097 [2024-12-13 23:00:35.050029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:56.097 [2024-12-13 23:00:35.050037] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:56.097 [2024-12-13 23:00:35.050044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.097 [2024-12-13 23:00:35.050056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:56.097 [2024-12-13 23:00:35.050062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:56.097 [2024-12-13 23:00:35.050069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:56.097 [2024-12-13 23:00:35.050082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:56.097 [2024-12-13 23:00:35.050089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:56.097 [2024-12-13 23:00:35.050094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:56.097 [2024-12-13 23:00:35.050102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:56.097 [2024-12-13 23:00:35.050108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:56.097 [2024-12-13 23:00:35.050115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:56.097 [2024-12-13 23:00:35.050120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:56.097 [2024-12-13 23:00:35.050127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:56.097 [2024-12-13 23:00:35.050133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:56.097 [2024-12-13 23:00:35.050140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:56.097 [2024-12-13 23:00:35.050146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:56.097 [2024-12-13 23:00:35.050153] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:56.097 [2024-12-13 
23:00:35.050160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.097 [2024-12-13 23:00:35.050169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:56.097 [2024-12-13 23:00:35.050174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:56.097 [2024-12-13 23:00:35.050181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:56.097 [2024-12-13 23:00:35.050187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:56.097 [2024-12-13 23:00:35.050195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.097 [2024-12-13 23:00:35.050201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:56.097 [2024-12-13 23:00:35.050209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:19:56.097 [2024-12-13 23:00:35.050216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.097 [2024-12-13 23:00:35.075232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.097 [2024-12-13 23:00:35.075272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.097 [2024-12-13 23:00:35.075284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.967 ms 00:19:56.098 [2024-12-13 23:00:35.075293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.075394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.075403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:56.098 [2024-12-13 23:00:35.075412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:56.098 [2024-12-13 23:00:35.075418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.101635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.101671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.098 [2024-12-13 23:00:35.101681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.197 ms 00:19:56.098 [2024-12-13 23:00:35.101687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.101738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.101746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.098 [2024-12-13 23:00:35.101765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:56.098 [2024-12-13 23:00:35.101772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.102120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.102143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.098 [2024-12-13 23:00:35.102154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:56.098 [2024-12-13 23:00:35.102160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.102271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.102278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.098 [2024-12-13 23:00:35.102287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:56.098 [2024-12-13 23:00:35.102293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.114960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.114989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.098 [2024-12-13 23:00:35.114998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.646 ms 00:19:56.098 [2024-12-13 23:00:35.115004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.138494] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:56.098 [2024-12-13 23:00:35.138534] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:56.098 [2024-12-13 23:00:35.138547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.138554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:56.098 [2024-12-13 23:00:35.138563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.447 ms 00:19:56.098 [2024-12-13 23:00:35.138574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.157773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.157819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:56.098 [2024-12-13 23:00:35.157831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.129 ms 00:19:56.098 [2024-12-13 23:00:35.157837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.167021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.167050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:56.098 [2024-12-13 23:00:35.167062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.118 ms 00:19:56.098 [2024-12-13 23:00:35.167067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.175834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.175861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:56.098 [2024-12-13 23:00:35.175871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.718 ms 00:19:56.098 [2024-12-13 23:00:35.175876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.176345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.176363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:56.098 [2024-12-13 23:00:35.176371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:19:56.098 [2024-12-13 23:00:35.176377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 
23:00:35.221086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.098 [2024-12-13 23:00:35.221124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:56.098 [2024-12-13 23:00:35.221136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.689 ms 00:19:56.098 [2024-12-13 23:00:35.221143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.098 [2024-12-13 23:00:35.228923] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:56.357 [2024-12-13 23:00:35.240393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.357 [2024-12-13 23:00:35.240426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:56.357 [2024-12-13 23:00:35.240437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.178 ms 00:19:56.357 [2024-12-13 23:00:35.240445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.357 [2024-12-13 23:00:35.240501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.357 [2024-12-13 23:00:35.240511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:56.357 [2024-12-13 23:00:35.240518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:56.357 [2024-12-13 23:00:35.240525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.357 [2024-12-13 23:00:35.240562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.357 [2024-12-13 23:00:35.240570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:56.357 [2024-12-13 23:00:35.240576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:56.357 [2024-12-13 23:00:35.240585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.357 [2024-12-13 23:00:35.240602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.357 [2024-12-13 23:00:35.240610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:56.357 [2024-12-13 23:00:35.240616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:56.357 [2024-12-13 23:00:35.240624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.357 [2024-12-13 23:00:35.240650] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:56.357 [2024-12-13 23:00:35.240659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.357 [2024-12-13 23:00:35.240667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:56.357 [2024-12-13 23:00:35.240673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:56.357 [2024-12-13 23:00:35.240678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.357 [2024-12-13 23:00:35.258374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.357 [2024-12-13 23:00:35.258403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:56.357 [2024-12-13 23:00:35.258413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.676 ms 00:19:56.357 [2024-12-13 23:00:35.258419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.357 [2024-12-13 23:00:35.258489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.357 [2024-12-13 23:00:35.258498] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:56.357 [2024-12-13 23:00:35.258506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:56.357 [2024-12-13 23:00:35.258513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.357 [2024-12-13 23:00:35.259471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.357 [2024-12-13 23:00:35.261778] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 239.482 ms, result 0 00:19:56.357 [2024-12-13 23:00:35.262720] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:56.357 Some configs were skipped because the RPC state that can call them passed over. 00:19:56.357 23:00:35 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:56.357 [2024-12-13 23:00:35.483297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.357 [2024-12-13 23:00:35.483336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:56.357 [2024-12-13 23:00:35.483345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:19:56.357 [2024-12-13 23:00:35.483353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.357 [2024-12-13 23:00:35.483378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.694 ms, result 0 00:19:56.357 true 00:19:56.616 23:00:35 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:56.616 [2024-12-13 23:00:35.678942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.616 [2024-12-13 23:00:35.678977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:56.616 [2024-12-13 23:00:35.678987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:19:56.616 [2024-12-13 23:00:35.678993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.616 [2024-12-13 23:00:35.679020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.140 ms, result 0 00:19:56.616 true 00:19:56.616 23:00:35 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78453 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78453 ']' 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78453 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78453 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.616 killing process with pid 78453 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78453' 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78453 00:19:56.616 23:00:35 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78453 00:19:57.183 [2024-12-13 23:00:36.245045] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.245096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.183 [2024-12-13 23:00:36.245107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:57.183 [2024-12-13 23:00:36.245114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.245133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:57.183 [2024-12-13 23:00:36.247294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.247321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.183 [2024-12-13 23:00:36.247333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:19:57.183 [2024-12-13 23:00:36.247339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.247564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.247572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.183 [2024-12-13 23:00:36.247580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:19:57.183 [2024-12-13 23:00:36.247585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.250817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.250843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.183 [2024-12-13 23:00:36.250853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:19:57.183 [2024-12-13 23:00:36.250859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.256028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.256054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:57.183 [2024-12-13 23:00:36.256063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.140 ms 00:19:57.183 [2024-12-13 23:00:36.256070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.263614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.263648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.183 [2024-12-13 23:00:36.263658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.486 ms 00:19:57.183 [2024-12-13 23:00:36.263664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.270242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.270271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.183 [2024-12-13 23:00:36.270280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.546 ms 00:19:57.183 [2024-12-13 23:00:36.270287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.270398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.270406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.183 [2024-12-13 23:00:36.270414] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:57.183 [2024-12-13 23:00:36.270419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.278564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.278589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:57.183 [2024-12-13 23:00:36.278598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.128 ms 00:19:57.183 [2024-12-13 23:00:36.278604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.285993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.286018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:57.183 [2024-12-13 23:00:36.286029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.357 ms 00:19:57.183 [2024-12-13 23:00:36.286034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.293135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.293167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.183 [2024-12-13 23:00:36.293176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.068 ms 00:19:57.183 [2024-12-13 23:00:36.293181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.300288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.183 [2024-12-13 23:00:36.300313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.183 [2024-12-13 23:00:36.300322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.054 ms 00:19:57.183 [2024-12-13 23:00:36.300327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.183 [2024-12-13 23:00:36.300355] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.183 [2024-12-13 23:00:36.300366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300433] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:57.183 [2024-12-13 23:00:36.300465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 
[2024-12-13 23:00:36.300591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:57.184 [2024-12-13 23:00:36.300752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.300993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.301001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.301008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.301015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.184 [2024-12-13 23:00:36.301031] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.184 [2024-12-13 23:00:36.301041] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9143edc-1a7a-4c95-b8e5-8cccde6ada68 00:19:57.184 [2024-12-13 23:00:36.301049] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:57.184 [2024-12-13 23:00:36.301056] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:57.184 [2024-12-13 23:00:36.301062] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:57.184 [2024-12-13 23:00:36.301069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:57.184 [2024-12-13 23:00:36.301074] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.184 [2024-12-13 23:00:36.301082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:57.184 [2024-12-13 23:00:36.301088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.184 [2024-12-13 23:00:36.301094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.185 [2024-12-13 23:00:36.301099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.185 [2024-12-13 23:00:36.301105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:57.185 [2024-12-13 23:00:36.301111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.185 [2024-12-13 23:00:36.301118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:19:57.185 [2024-12-13 23:00:36.301123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.185 [2024-12-13 23:00:36.310961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.185 [2024-12-13 23:00:36.310986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.185 [2024-12-13 23:00:36.310997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.818 ms 00:19:57.185 [2024-12-13 23:00:36.311003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.185 [2024-12-13 23:00:36.311288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.185 [2024-12-13 23:00:36.311296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:57.185 [2024-12-13 23:00:36.311306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:57.185 [2024-12-13 23:00:36.311311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-12-13 23:00:36.346380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.443 [2024-12-13 23:00:36.346407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.443 [2024-12-13 23:00:36.346418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.443 [2024-12-13 23:00:36.346424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-12-13 23:00:36.346500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.443 [2024-12-13 23:00:36.346508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.443 [2024-12-13 23:00:36.346517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.443 [2024-12-13 23:00:36.346523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-12-13 23:00:36.346558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.443 [2024-12-13 23:00:36.346566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.443 [2024-12-13 23:00:36.346575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.443 [2024-12-13 23:00:36.346581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-12-13 23:00:36.346596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.443 [2024-12-13 23:00:36.346602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.443 [2024-12-13 23:00:36.346608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.443 [2024-12-13 23:00:36.346615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-12-13 23:00:36.407039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.443 [2024-12-13 23:00:36.407075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.443 [2024-12-13 23:00:36.407085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.444 [2024-12-13 23:00:36.407091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-12-13 
23:00:36.455696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.444 [2024-12-13 23:00:36.455731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.444 [2024-12-13 23:00:36.455741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.444 [2024-12-13 23:00:36.455749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-12-13 23:00:36.455829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.444 [2024-12-13 23:00:36.455837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.444 [2024-12-13 23:00:36.455847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.444 [2024-12-13 23:00:36.455853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-12-13 23:00:36.455878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.444 [2024-12-13 23:00:36.455885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.444 [2024-12-13 23:00:36.455892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.444 [2024-12-13 23:00:36.455897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-12-13 23:00:36.455973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.444 [2024-12-13 23:00:36.455980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.444 [2024-12-13 23:00:36.455988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.444 [2024-12-13 23:00:36.455994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-12-13 23:00:36.456019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.444 [2024-12-13 23:00:36.456026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:57.444 [2024-12-13 23:00:36.456033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.444 [2024-12-13 23:00:36.456039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-12-13 23:00:36.456070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.444 [2024-12-13 23:00:36.456076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.444 [2024-12-13 23:00:36.456085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.444 [2024-12-13 23:00:36.456091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-12-13 23:00:36.456124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.444 [2024-12-13 23:00:36.456132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.444 [2024-12-13 23:00:36.456139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.444 [2024-12-13 23:00:36.456145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-12-13 23:00:36.456249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 211.184 ms, result 0 00:19:58.015 23:00:36 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:58.015 23:00:36 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:58.015 [2024-12-13 23:00:37.046623] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:58.015 [2024-12-13 23:00:37.046749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78500 ] 00:19:58.291 [2024-12-13 23:00:37.205030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.291 [2024-12-13 23:00:37.289155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.567 [2024-12-13 23:00:37.500324] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.567 [2024-12-13 23:00:37.500377] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.567 [2024-12-13 23:00:37.652811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.652859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:58.567 [2024-12-13 23:00:37.652872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:58.567 [2024-12-13 23:00:37.652880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.655541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.655576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.567 [2024-12-13 23:00:37.655586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:19:58.567 [2024-12-13 23:00:37.655593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.655674] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:58.567 [2024-12-13 23:00:37.656367] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:58.567 [2024-12-13 23:00:37.656389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.656397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.567 [2024-12-13 23:00:37.656405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:19:58.567 [2024-12-13 23:00:37.656412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.657559] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:58.567 [2024-12-13 23:00:37.670324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.670359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:58.567 [2024-12-13 23:00:37.670370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.766 ms 00:19:58.567 [2024-12-13 23:00:37.670377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.670463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.670474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:58.567 [2024-12-13 23:00:37.670483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.022 ms 00:19:58.567 [2024-12-13 23:00:37.670490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.675675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.675706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.567 [2024-12-13 23:00:37.675715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.142 ms 00:19:58.567 [2024-12-13 23:00:37.675723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.675842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.675853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.567 [2024-12-13 23:00:37.675861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:58.567 [2024-12-13 23:00:37.675869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.675895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.675902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:58.567 [2024-12-13 23:00:37.675910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:58.567 [2024-12-13 23:00:37.675917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.675935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:58.567 [2024-12-13 23:00:37.679231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.679260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.567 [2024-12-13 23:00:37.679270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.300 ms 00:19:58.567 [2024-12-13 23:00:37.679277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.679314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.679323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:58.567 [2024-12-13 23:00:37.679331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:58.567 [2024-12-13 23:00:37.679337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.679363] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:58.567 [2024-12-13 23:00:37.679382] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:58.567 [2024-12-13 23:00:37.679416] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:58.567 [2024-12-13 23:00:37.679430] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:58.567 [2024-12-13 23:00:37.679533] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:58.567 [2024-12-13 23:00:37.679544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:58.567 [2024-12-13 23:00:37.679554] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:58.567 [2024-12-13 23:00:37.679566] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:58.567 [2024-12-13 23:00:37.679574] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:58.567 [2024-12-13 23:00:37.679582] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:58.567 [2024-12-13 23:00:37.679589] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:58.567 [2024-12-13 23:00:37.679597] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:58.567 [2024-12-13 23:00:37.679604] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:58.567 [2024-12-13 23:00:37.679611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.679618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:58.567 [2024-12-13 23:00:37.679626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:19:58.567 [2024-12-13 23:00:37.679633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.679721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.567 [2024-12-13 23:00:37.679731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:58.567 [2024-12-13 23:00:37.679738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:58.567 [2024-12-13 23:00:37.679745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.567 [2024-12-13 23:00:37.679875] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:58.567 [2024-12-13 23:00:37.679887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:58.567 [2024-12-13 23:00:37.679895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.567 [2024-12-13 23:00:37.679903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.567 [2024-12-13 23:00:37.679910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:58.567 [2024-12-13 23:00:37.679917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:58.567 [2024-12-13 23:00:37.679923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:58.567 [2024-12-13 23:00:37.679930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:58.567 [2024-12-13 23:00:37.679937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:58.567 [2024-12-13 23:00:37.679943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.567 [2024-12-13 23:00:37.679949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:58.567 [2024-12-13 23:00:37.679963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:58.567 [2024-12-13 23:00:37.679969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.567 [2024-12-13 23:00:37.679976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:58.567 [2024-12-13 23:00:37.679982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:58.567 [2024-12-13 23:00:37.679988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.567 [2024-12-13 23:00:37.679996] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:58.567 [2024-12-13 23:00:37.680003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:58.567 [2024-12-13 23:00:37.680009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.567 [2024-12-13 23:00:37.680015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:58.567 [2024-12-13 23:00:37.680021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:58.567 [2024-12-13 23:00:37.680028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.567 [2024-12-13 23:00:37.680034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:58.567 [2024-12-13 23:00:37.680040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:58.567 [2024-12-13 23:00:37.680046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.567 [2024-12-13 23:00:37.680053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:58.567 [2024-12-13 23:00:37.680059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:58.567 [2024-12-13 23:00:37.680065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.567 [2024-12-13 23:00:37.680072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:58.567 [2024-12-13 23:00:37.680078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:58.567 [2024-12-13 23:00:37.680084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.567 [2024-12-13 23:00:37.680090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:58.567 [2024-12-13 23:00:37.680097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:58.567 [2024-12-13 23:00:37.680103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.568 [2024-12-13 23:00:37.680109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:58.568 [2024-12-13 23:00:37.680115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:58.568 [2024-12-13 23:00:37.680122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.568 [2024-12-13 23:00:37.680128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:58.568 [2024-12-13 23:00:37.680134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:58.568 [2024-12-13 23:00:37.680141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.568 [2024-12-13 23:00:37.680147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:58.568 [2024-12-13 23:00:37.680153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:58.568 [2024-12-13 23:00:37.680159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.568 [2024-12-13 23:00:37.680166] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:58.568 [2024-12-13 23:00:37.680173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:58.568 [2024-12-13 23:00:37.680182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.568 [2024-12-13 23:00:37.680189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.568 [2024-12-13 23:00:37.680196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:58.568 
[2024-12-13 23:00:37.680204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:58.568 [2024-12-13 23:00:37.680210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:58.568 [2024-12-13 23:00:37.680216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:58.568 [2024-12-13 23:00:37.680222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:58.568 [2024-12-13 23:00:37.680229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:58.568 [2024-12-13 23:00:37.680237] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:58.568 [2024-12-13 23:00:37.680246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.568 [2024-12-13 23:00:37.680254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:58.568 [2024-12-13 23:00:37.680261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:58.568 [2024-12-13 23:00:37.680268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:58.568 [2024-12-13 23:00:37.680274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:58.568 [2024-12-13 23:00:37.680281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:58.568 [2024-12-13 23:00:37.680288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:58.568 [2024-12-13 23:00:37.680294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:58.568 [2024-12-13 23:00:37.680301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:58.568 [2024-12-13 23:00:37.680307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:58.568 [2024-12-13 23:00:37.680314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:58.568 [2024-12-13 23:00:37.680322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:58.568 [2024-12-13 23:00:37.680329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:58.568 [2024-12-13 23:00:37.680335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:58.568 [2024-12-13 23:00:37.680343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:58.568 [2024-12-13 23:00:37.680350] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:58.568 [2024-12-13 23:00:37.680359] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.568 [2024-12-13 23:00:37.680367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:58.568 [2024-12-13 23:00:37.680374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:58.568 [2024-12-13 23:00:37.680381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:58.568 [2024-12-13 23:00:37.680388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:58.568 [2024-12-13 23:00:37.680395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.568 [2024-12-13 23:00:37.680405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:58.568 [2024-12-13 23:00:37.680412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:19:58.568 [2024-12-13 23:00:37.680419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.831 [2024-12-13 23:00:37.706998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.831 [2024-12-13 23:00:37.707033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.831 [2024-12-13 23:00:37.707044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.520 ms 00:19:58.831 [2024-12-13 23:00:37.707051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.831 [2024-12-13 23:00:37.707177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.831 [2024-12-13 23:00:37.707187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:58.831 [2024-12-13 23:00:37.707196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:58.831 [2024-12-13 23:00:37.707203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.831 [2024-12-13 23:00:37.747036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.831 [2024-12-13 23:00:37.747083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.831 [2024-12-13 23:00:37.747098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.812 ms 00:19:58.831 [2024-12-13 23:00:37.747107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.831 [2024-12-13 23:00:37.747208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.831 [2024-12-13 23:00:37.747220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.831 [2024-12-13 23:00:37.747228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:58.831 [2024-12-13 23:00:37.747236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.831 [2024-12-13 23:00:37.747625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.831 [2024-12-13 23:00:37.747655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.831 [2024-12-13 23:00:37.747664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:19:58.831 [2024-12-13 23:00:37.747676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.831 [2024-12-13 
23:00:37.747848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.831 [2024-12-13 23:00:37.747871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.831 [2024-12-13 23:00:37.747880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:58.831 [2024-12-13 23:00:37.747887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.831 [2024-12-13 23:00:37.762163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.831 [2024-12-13 23:00:37.762202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.831 [2024-12-13 23:00:37.762213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.255 ms 00:19:58.831 [2024-12-13 23:00:37.762220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.831 [2024-12-13 23:00:37.775632] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:58.831 [2024-12-13 23:00:37.775676] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:58.831 [2024-12-13 23:00:37.775688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.831 [2024-12-13 23:00:37.775696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:58.831 [2024-12-13 23:00:37.775705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.363 ms 00:19:58.831 [2024-12-13 23:00:37.775712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.831 [2024-12-13 23:00:37.800731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.831 [2024-12-13 23:00:37.800783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:58.832 [2024-12-13 23:00:37.800794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.903 ms 00:19:58.832 [2024-12-13 23:00:37.800803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.813912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.813958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:58.832 [2024-12-13 23:00:37.813969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.015 ms 00:19:58.832 [2024-12-13 23:00:37.813976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.826798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.826850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:58.832 [2024-12-13 23:00:37.826862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.731 ms 00:19:58.832 [2024-12-13 23:00:37.826869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.827529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.827558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:58.832 [2024-12-13 23:00:37.827569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:19:58.832 [2024-12-13 23:00:37.827577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.895305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.895371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:58.832 [2024-12-13 23:00:37.895388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.700 ms 00:19:58.832 [2024-12-13 23:00:37.895397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.907880] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:58.832 [2024-12-13 23:00:37.928245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.928299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:58.832 [2024-12-13 23:00:37.928312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.722 ms 00:19:58.832 [2024-12-13 23:00:37.928327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.928434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.928446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:58.832 [2024-12-13 23:00:37.928456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:58.832 [2024-12-13 23:00:37.928465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.928525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.928535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:58.832 [2024-12-13 23:00:37.928544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:58.832 [2024-12-13 23:00:37.928557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.928590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.928599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:58.832 [2024-12-13 23:00:37.928608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:58.832 [2024-12-13 23:00:37.928616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.928656] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:58.832 [2024-12-13 23:00:37.928668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.928676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:58.832 [2024-12-13 23:00:37.928684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:58.832 [2024-12-13 23:00:37.928692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.955987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.956063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:58.832 [2024-12-13 23:00:37.956079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.273 ms 00:19:58.832 [2024-12-13 23:00:37.956088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.956256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.832 [2024-12-13 23:00:37.956271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:58.832 [2024-12-13 23:00:37.956282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:58.832 [2024-12-13 23:00:37.956290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.832 [2024-12-13 23:00:37.957481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.832 [2024-12-13 23:00:37.961172] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.332 ms, result 0 00:19:58.832 [2024-12-13 23:00:37.962423] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:59.097 [2024-12-13 23:00:37.976241] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.044  [2024-12-13T23:00:40.130Z] Copying: 13/256 [MB] (13 MBps) [2024-12-13T23:00:41.075Z] Copying: 23/256 [MB] (10 MBps) [2024-12-13T23:00:42.020Z] Copying: 35/256 [MB] (11 MBps) [2024-12-13T23:00:43.405Z] Copying: 49/256 [MB] (14 MBps) [2024-12-13T23:00:44.347Z] Copying: 67/256 [MB] (17 MBps) [2024-12-13T23:00:45.291Z] Copying: 87/256 [MB] (19 MBps) [2024-12-13T23:00:46.232Z] Copying: 108/256 [MB] (21 MBps) [2024-12-13T23:00:47.171Z] Copying: 124/256 [MB] (16 MBps) [2024-12-13T23:00:48.117Z] Copying: 146/256 [MB] (22 MBps) [2024-12-13T23:00:49.065Z] Copying: 159/256 [MB] (12 MBps) [2024-12-13T23:00:50.012Z] Copying: 178/256 [MB] (19 MBps) [2024-12-13T23:00:51.398Z] Copying: 191/256 [MB] (12 MBps) [2024-12-13T23:00:52.341Z] Copying: 207/256 [MB] (16 MBps) [2024-12-13T23:00:53.286Z] Copying: 228/256 [MB] (21 MBps) [2024-12-13T23:00:54.240Z] Copying: 240/256 [MB] (11 MBps) [2024-12-13T23:00:54.815Z] Copying: 250/256 [MB] (10 MBps) [2024-12-13T23:00:54.815Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-13 23:00:54.535115] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.675 [2024-12-13 23:00:54.545470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.545526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.675 [2024-12-13 23:00:54.545547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.675 [2024-12-13 23:00:54.545557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.545581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.675 [2024-12-13 23:00:54.548645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.548684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.675 [2024-12-13 23:00:54.548696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:20:15.675 [2024-12-13 23:00:54.548704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.548985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.548997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.675 [2024-12-13 23:00:54.549007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:15.675 [2024-12-13 23:00:54.549015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 
23:00:54.552708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.552731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.675 [2024-12-13 23:00:54.552741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:20:15.675 [2024-12-13 23:00:54.552750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.560034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.560069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.675 [2024-12-13 23:00:54.560079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.257 ms 00:20:15.675 [2024-12-13 23:00:54.560088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.586001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.586052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.675 [2024-12-13 23:00:54.586064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.833 ms 00:20:15.675 [2024-12-13 23:00:54.586071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.602332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.602380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.675 [2024-12-13 23:00:54.602401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.210 ms 00:20:15.675 [2024-12-13 23:00:54.602409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.602567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.602580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.675 [2024-12-13 23:00:54.602599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:15.675 [2024-12-13 23:00:54.602607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.628301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.628344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.675 [2024-12-13 23:00:54.628354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.677 ms 00:20:15.675 [2024-12-13 23:00:54.628361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.653464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.653508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.675 [2024-12-13 23:00:54.653519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.053 ms 00:20:15.675 [2024-12-13 23:00:54.653525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.678650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.678692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.675 [2024-12-13 23:00:54.678704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.075 ms 00:20:15.675 [2024-12-13 23:00:54.678711] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.703545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.675 [2024-12-13 23:00:54.703601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.675 [2024-12-13 23:00:54.703613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.737 ms 00:20:15.675 [2024-12-13 23:00:54.703620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.675 [2024-12-13 23:00:54.703680] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.675 [2024-12-13 23:00:54.703697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703893] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.675 [2024-12-13 23:00:54.703929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.703936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.703943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.703953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.703961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.703968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.703976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.703983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.703991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.703998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704080] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 
23:00:54.704265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:20:15.676 [2024-12-13 23:00:54.704462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.676 [2024-12-13 23:00:54.704507] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.676 [2024-12-13 23:00:54.704516] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9143edc-1a7a-4c95-b8e5-8cccde6ada68 00:20:15.676 [2024-12-13 23:00:54.704525] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.676 [2024-12-13 23:00:54.704533] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.676 [2024-12-13 23:00:54.704540] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.676 [2024-12-13 23:00:54.704548] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.676 [2024-12-13 23:00:54.704555] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.676 [2024-12-13 23:00:54.704563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.676 [2024-12-13 23:00:54.704574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.676 [2024-12-13 23:00:54.704581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.676 [2024-12-13 23:00:54.704588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.676 [2024-12-13 23:00:54.704595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.676 [2024-12-13 23:00:54.704603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.676 [2024-12-13 23:00:54.704612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:20:15.676 [2024-12-13 23:00:54.704619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.676 [2024-12-13 23:00:54.718247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.676 [2024-12-13 23:00:54.718289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.676 [2024-12-13 23:00:54.718300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.607 ms 00:20:15.676 [2024-12-13 23:00:54.718308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.676 [2024-12-13 23:00:54.718715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.676 [2024-12-13 23:00:54.718742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.676 [2024-12-13 23:00:54.718753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:20:15.677 [2024-12-13 23:00:54.718777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.677 [2024-12-13 23:00:54.757624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.677 [2024-12-13 23:00:54.757677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:15.677 [2024-12-13 23:00:54.757688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.677 [2024-12-13 23:00:54.757704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.677 [2024-12-13 23:00:54.757800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.677 [2024-12-13 23:00:54.757812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.677 [2024-12-13 23:00:54.757822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.677 [2024-12-13 23:00:54.757832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.677 [2024-12-13 23:00:54.757883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.677 [2024-12-13 23:00:54.757893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.677 [2024-12-13 23:00:54.757903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.677 [2024-12-13 23:00:54.757912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.677 [2024-12-13 23:00:54.757933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.677 [2024-12-13 23:00:54.757942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.677 [2024-12-13 23:00:54.757952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.677 [2024-12-13 23:00:54.757960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.936 [2024-12-13 23:00:54.835233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.936 [2024-12-13 23:00:54.835279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.936 [2024-12-13 23:00:54.835289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.936 [2024-12-13 23:00:54.835297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.936 [2024-12-13 23:00:54.886901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.936 [2024-12-13 23:00:54.886940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.936 [2024-12-13 23:00:54.886949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.936 [2024-12-13 23:00:54.886956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.936 [2024-12-13 23:00:54.886999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.936 [2024-12-13 23:00:54.887006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.936 [2024-12-13 23:00:54.887013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.936 [2024-12-13 23:00:54.887019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.936 [2024-12-13 23:00:54.887044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.936 [2024-12-13 23:00:54.887054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.936 [2024-12-13 23:00:54.887061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.936 [2024-12-13 23:00:54.887067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.936 [2024-12-13 23:00:54.887135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.936 [2024-12-13 23:00:54.887143] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.936 [2024-12-13 23:00:54.887150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.936 [2024-12-13 23:00:54.887156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.936 [2024-12-13 23:00:54.887181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.936 [2024-12-13 23:00:54.887189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.936 [2024-12-13 23:00:54.887197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.936 [2024-12-13 23:00:54.887203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.936 [2024-12-13 23:00:54.887235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.936 [2024-12-13 23:00:54.887241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.936 [2024-12-13 23:00:54.887248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.936 [2024-12-13 23:00:54.887254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.936 [2024-12-13 23:00:54.887288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.936 [2024-12-13 23:00:54.887298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.936 [2024-12-13 23:00:54.887305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.936 [2024-12-13 23:00:54.887310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.936 [2024-12-13 23:00:54.887421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.964 ms, result 0 00:20:16.504 00:20:16.504 00:20:16.504 23:00:55 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:16.504 23:00:55 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:17.076 23:00:55 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:17.076 [2024-12-13 23:00:56.071368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
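The ftl_trim steps above compare the first 4,194,304 bytes of the dumped data file against /dev/zero and then rewrite the FTL bdev through spdk_dd with --count=1024; the copy progress reported further down shows 4096/4096 [kB]. As a sketch of that size bookkeeping (the 4 KiB I/O unit is an assumption, not a value printed in this log), the numbers are mutually consistent:

```python
# Hypothetical size check relating the trim-test numbers in the log above/below:
# cmp --bytes=4194304 compares 4 MiB, and the spdk_dd run with --count=1024
# later reports "Copying: 4096/4096 [kB]". Assuming a 4 KiB I/O unit (an
# assumption, not stated in the log), both figures describe the same 4 MiB.
IO_UNIT_BYTES = 4096          # assumed I/O unit size
COUNT = 1024                  # from: spdk_dd ... --count=1024
CMP_BYTES = 4_194_304         # from: cmp --bytes=4194304 ... /dev/zero

total_bytes = IO_UNIT_BYTES * COUNT
assert total_bytes == CMP_BYTES            # 4,194,304 bytes
print(f"{total_bytes // 1024} kB copied")  # 4096 kB, matching the progress line
```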
00:20:17.076 [2024-12-13 23:00:56.071513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78700 ] 00:20:17.334 [2024-12-13 23:00:56.230464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.334 [2024-12-13 23:00:56.314969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.595 [2024-12-13 23:00:56.523684] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.595 [2024-12-13 23:00:56.523737] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.595 [2024-12-13 23:00:56.675013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.675051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.595 [2024-12-13 23:00:56.675061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.595 [2024-12-13 23:00:56.675067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.677119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.677149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.595 [2024-12-13 23:00:56.677157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms 00:20:17.595 [2024-12-13 23:00:56.677163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.677218] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:17.595 [2024-12-13 23:00:56.677720] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.595 [2024-12-13 23:00:56.677741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.677747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.595 [2024-12-13 23:00:56.677754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:20:17.595 [2024-12-13 23:00:56.677772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.678699] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.595 [2024-12-13 23:00:56.688163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.688190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.595 [2024-12-13 23:00:56.688198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.465 ms 00:20:17.595 [2024-12-13 23:00:56.688203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.688271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.688280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.595 [2024-12-13 23:00:56.688287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:17.595 [2024-12-13 23:00:56.688292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.692600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:17.595 [2024-12-13 23:00:56.692624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.595 [2024-12-13 23:00:56.692631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.279 ms 00:20:17.595 [2024-12-13 23:00:56.692637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.692704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.692712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.595 [2024-12-13 23:00:56.692718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:17.595 [2024-12-13 23:00:56.692723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.692740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.692746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.595 [2024-12-13 23:00:56.692752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:17.595 [2024-12-13 23:00:56.692768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.692784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.595 [2024-12-13 23:00:56.695316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.695339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.595 [2024-12-13 23:00:56.695346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:20:17.595 [2024-12-13 23:00:56.695352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.695379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.695386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.595 [2024-12-13 23:00:56.695392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:17.595 [2024-12-13 23:00:56.695398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.695416] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.595 [2024-12-13 23:00:56.695433] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.595 [2024-12-13 23:00:56.695465] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.595 [2024-12-13 23:00:56.695480] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.595 [2024-12-13 23:00:56.695578] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.595 [2024-12-13 23:00:56.695586] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.595 [2024-12-13 23:00:56.695594] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.595 [2024-12-13 23:00:56.695606] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.595 [2024-12-13 23:00:56.695615] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.595 [2024-12-13 23:00:56.695622] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.595 [2024-12-13 23:00:56.695627] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.595 [2024-12-13 23:00:56.695633] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.595 [2024-12-13 23:00:56.695642] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.595 [2024-12-13 23:00:56.695648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.695653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.595 [2024-12-13 23:00:56.695659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:20:17.595 [2024-12-13 23:00:56.695666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.695735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.595 [2024-12-13 23:00:56.695744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.595 [2024-12-13 23:00:56.695749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:17.595 [2024-12-13 23:00:56.695766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.595 [2024-12-13 23:00:56.695857] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.595 [2024-12-13 23:00:56.695866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.595 [2024-12-13 23:00:56.695876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.595 [2024-12-13 23:00:56.695882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.595 [2024-12-13 23:00:56.695887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.595 [2024-12-13 23:00:56.695893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.595 [2024-12-13 23:00:56.695898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.595 [2024-12-13 23:00:56.695903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.595 [2024-12-13 23:00:56.695909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.595 [2024-12-13 23:00:56.695913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.595 [2024-12-13 23:00:56.695920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.595 [2024-12-13 23:00:56.695929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.595 [2024-12-13 23:00:56.695934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.595 [2024-12-13 23:00:56.695939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.595 [2024-12-13 23:00:56.695944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.596 [2024-12-13 23:00:56.695949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.596 [2024-12-13 23:00:56.695954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.596 [2024-12-13 23:00:56.695959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.596 [2024-12-13 23:00:56.695964] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.596 [2024-12-13 23:00:56.695969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.596 [2024-12-13 23:00:56.695977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.596 [2024-12-13 23:00:56.695982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.596 [2024-12-13 23:00:56.695987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.596 [2024-12-13 23:00:56.695992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.596 [2024-12-13 23:00:56.695999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.596 [2024-12-13 23:00:56.696005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.596 [2024-12-13 23:00:56.696012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.596 [2024-12-13 23:00:56.696017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.596 [2024-12-13 23:00:56.696023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.596 [2024-12-13 23:00:56.696030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.596 [2024-12-13 23:00:56.696035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.596 [2024-12-13 23:00:56.696040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.596 [2024-12-13 23:00:56.696045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.596 [2024-12-13 23:00:56.696050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.596 [2024-12-13 23:00:56.696057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.596 [2024-12-13 23:00:56.696062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.596 [2024-12-13 23:00:56.696067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.596 [2024-12-13 23:00:56.696072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.596 [2024-12-13 23:00:56.696078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.596 [2024-12-13 23:00:56.696083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.596 [2024-12-13 23:00:56.696087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.596 [2024-12-13 23:00:56.696092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.596 [2024-12-13 23:00:56.696098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.596 [2024-12-13 23:00:56.696103] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.596 [2024-12-13 23:00:56.696109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.596 [2024-12-13 23:00:56.696116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.596 [2024-12-13 23:00:56.696121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.596 [2024-12-13 23:00:56.696126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:17.596 [2024-12-13 23:00:56.696131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.596 [2024-12-13 23:00:56.696137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.596 
[2024-12-13 23:00:56.696142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.596 [2024-12-13 23:00:56.696147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.596 [2024-12-13 23:00:56.696152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.596 [2024-12-13 23:00:56.696158] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.596 [2024-12-13 23:00:56.696165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.596 [2024-12-13 23:00:56.696171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.596 [2024-12-13 23:00:56.696176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.596 [2024-12-13 23:00:56.696182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.596 [2024-12-13 23:00:56.696187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.596 [2024-12-13 23:00:56.696192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.596 [2024-12-13 23:00:56.696198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.596 [2024-12-13 23:00:56.696203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.596 [2024-12-13 23:00:56.696208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.596 [2024-12-13 23:00:56.696214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.596 [2024-12-13 23:00:56.696219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.596 [2024-12-13 23:00:56.696224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.596 [2024-12-13 23:00:56.696230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.596 [2024-12-13 23:00:56.696235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.596 [2024-12-13 23:00:56.696240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.596 [2024-12-13 23:00:56.696245] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.596 [2024-12-13 23:00:56.696252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.596 [2024-12-13 23:00:56.696258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.596 [2024-12-13 23:00:56.696263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.596 [2024-12-13 23:00:56.696269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.596 [2024-12-13 23:00:56.696274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.596 [2024-12-13 23:00:56.696280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.596 [2024-12-13 23:00:56.696288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.596 [2024-12-13 23:00:56.696293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:20:17.596 [2024-12-13 23:00:56.696299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.596 [2024-12-13 23:00:56.716867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.596 [2024-12-13 23:00:56.716894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.596 [2024-12-13 23:00:56.716901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.527 ms 00:20:17.596 [2024-12-13 23:00:56.716907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.596 [2024-12-13 23:00:56.716999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.596 [2024-12-13 23:00:56.717007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.596 [2024-12-13 23:00:56.717013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:17.596 [2024-12-13 23:00:56.717018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.755509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.755543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.855 [2024-12-13 23:00:56.755554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.475 ms 00:20:17.855 [2024-12-13 23:00:56.755560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.755621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.755630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.855 [2024-12-13 23:00:56.755637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.855 [2024-12-13 23:00:56.755643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.755965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.755984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.855 [2024-12-13 23:00:56.755991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:20:17.855 [2024-12-13 23:00:56.756002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.756103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.756116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.855 [2024-12-13 23:00:56.756123] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:17.855 [2024-12-13 23:00:56.756129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.766783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.766809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.855 [2024-12-13 23:00:56.766816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.639 ms 00:20:17.855 [2024-12-13 23:00:56.766822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.776688] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:17.855 [2024-12-13 23:00:56.776715] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.855 [2024-12-13 23:00:56.776724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.776730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.855 [2024-12-13 23:00:56.776737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.813 ms 00:20:17.855 [2024-12-13 23:00:56.776743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.794857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.794885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.855 [2024-12-13 23:00:56.794893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.060 ms 00:20:17.855 [2024-12-13 23:00:56.794900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.803583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.803610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:17.855 [2024-12-13 23:00:56.803617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.641 ms 00:20:17.855 [2024-12-13 23:00:56.803623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.811997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.812020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:17.855 [2024-12-13 23:00:56.812027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.334 ms 00:20:17.855 [2024-12-13 23:00:56.812033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.812487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.812511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.855 [2024-12-13 23:00:56.812518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:20:17.855 [2024-12-13 23:00:56.812524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.855987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.855 [2024-12-13 23:00:56.856023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:17.855 [2024-12-13 23:00:56.856033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.445 ms 00:20:17.855 [2024-12-13 23:00:56.856039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.855 [2024-12-13 23:00:56.863765] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:17.856 [2024-12-13 23:00:56.875113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.856 [2024-12-13 23:00:56.875144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.856 [2024-12-13 23:00:56.875153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.009 ms 00:20:17.856 [2024-12-13 23:00:56.875163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.856 [2024-12-13 23:00:56.875232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.856 [2024-12-13 23:00:56.875240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:17.856 [2024-12-13 23:00:56.875246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:17.856 [2024-12-13 23:00:56.875252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.856 [2024-12-13 23:00:56.875288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.856 [2024-12-13 23:00:56.875295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.856 [2024-12-13 23:00:56.875301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:17.856 [2024-12-13 23:00:56.875309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.856 [2024-12-13 23:00:56.875333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.856 [2024-12-13 23:00:56.875340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:17.856 [2024-12-13 23:00:56.875347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:17.856 [2024-12-13 23:00:56.875353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.856 [2024-12-13 23:00:56.875376] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:17.856 [2024-12-13 23:00:56.875383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.856 [2024-12-13 23:00:56.875389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:17.856 [2024-12-13 23:00:56.875395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:17.856 [2024-12-13 23:00:56.875401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.856 [2024-12-13 23:00:56.893698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.856 [2024-12-13 23:00:56.893730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.856 [2024-12-13 23:00:56.893739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.283 ms 00:20:17.856 [2024-12-13 23:00:56.893745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.856 [2024-12-13 23:00:56.893822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.856 [2024-12-13 23:00:56.893831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:17.856 [2024-12-13 23:00:56.893838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:17.856 [2024-12-13 23:00:56.893843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.856 
[2024-12-13 23:00:56.894453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.856 [2024-12-13 23:00:56.896853] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.228 ms, result 0 00:20:17.856 [2024-12-13 23:00:56.897793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.856 [2024-12-13 23:00:56.912530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.116  [2024-12-13T23:00:57.256Z] Copying: 4096/4096 [kB] (average 48 MBps)[2024-12-13 23:00:56.998021] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.116 [2024-12-13 23:00:57.004640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.004667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:18.116 [2024-12-13 23:00:57.004678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:18.116 [2024-12-13 23:00:57.004684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.004699] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:18.116 [2024-12-13 23:00:57.006762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.006785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:18.116 [2024-12-13 23:00:57.006793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:20:18.116 [2024-12-13 23:00:57.006800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.008349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.008384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:18.116 [2024-12-13 23:00:57.008393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:20:18.116 [2024-12-13 23:00:57.008399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.011501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.011524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:18.116 [2024-12-13 23:00:57.011531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:20:18.116 [2024-12-13 23:00:57.011537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.016718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.016742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:18.116 [2024-12-13 23:00:57.016749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.162 ms 00:20:18.116 [2024-12-13 23:00:57.016762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.033811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.033838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:18.116 [2024-12-13 23:00:57.033845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.018 ms 00:20:18.116 [2024-12-13 23:00:57.033851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.045036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.045064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:18.116 [2024-12-13 23:00:57.045072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.151 ms 00:20:18.116 [2024-12-13 23:00:57.045078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.045170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.045178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:18.116 [2024-12-13 23:00:57.045190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:18.116 [2024-12-13 23:00:57.045195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.062879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.062903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:18.116 [2024-12-13 23:00:57.062910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.672 ms 00:20:18.116 [2024-12-13 23:00:57.062915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.080529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.080554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:18.116 [2024-12-13 23:00:57.080561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:20:18.116 [2024-12-13 23:00:57.080566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.097767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.097793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:18.116 [2024-12-13 23:00:57.097800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.163 ms 00:20:18.116 [2024-12-13 23:00:57.097805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.114727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.116 [2024-12-13 23:00:57.114751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:18.116 [2024-12-13 23:00:57.114764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.876 ms 00:20:18.116 [2024-12-13 23:00:57.114770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.116 [2024-12-13 23:00:57.114795] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:18.116 [2024-12-13 23:00:57.114807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 
[2024-12-13 23:00:57.114831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:18.116 [2024-12-13 23:00:57.114880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:20:18.117 [2024-12-13 23:00:57.114966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.114995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:18.117 [2024-12-13 23:00:57.115367] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:18.117 [2024-12-13 23:00:57.115373] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9143edc-1a7a-4c95-b8e5-8cccde6ada68 00:20:18.117 [2024-12-13 23:00:57.115379] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:18.117 [2024-12-13 23:00:57.115385] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:20:18.117 [2024-12-13 23:00:57.115391] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:18.118 [2024-12-13 23:00:57.115397] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:18.118 [2024-12-13 23:00:57.115402] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:18.118 [2024-12-13 23:00:57.115408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:18.118 [2024-12-13 23:00:57.115415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:18.118 [2024-12-13 23:00:57.115420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:18.118 [2024-12-13 23:00:57.115425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:18.118 [2024-12-13 23:00:57.115430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.118 [2024-12-13 23:00:57.115436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:18.118 [2024-12-13 23:00:57.115442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:20:18.118 [2024-12-13 23:00:57.115448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.118 [2024-12-13 23:00:57.124384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.118 [2024-12-13 23:00:57.124406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:18.118 [2024-12-13 23:00:57.124414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.923 ms 00:20:18.118 [2024-12-13 23:00:57.124420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.118 [2024-12-13 23:00:57.124693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.118 [2024-12-13 23:00:57.124709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:18.118 [2024-12-13 23:00:57.124715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:20:18.118 [2024-12-13 23:00:57.124720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.118 [2024-12-13 23:00:57.152251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.118 [2024-12-13 23:00:57.152279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.118 [2024-12-13 23:00:57.152287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.118 [2024-12-13 23:00:57.152296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.118 [2024-12-13 23:00:57.152347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.118 [2024-12-13 23:00:57.152354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.118 [2024-12-13 23:00:57.152360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.118 [2024-12-13 23:00:57.152365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.118 [2024-12-13 23:00:57.152394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.118 [2024-12-13 23:00:57.152401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.118 [2024-12-13 23:00:57.152407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.118 [2024-12-13 23:00:57.152412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.118 [2024-12-13 23:00:57.152427] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.118 [2024-12-13 23:00:57.152433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.118 [2024-12-13 23:00:57.152439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.118 [2024-12-13 23:00:57.152444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.118 [2024-12-13 23:00:57.210743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.118 [2024-12-13 23:00:57.210782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.118 [2024-12-13 23:00:57.210791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.118 [2024-12-13 23:00:57.210796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.377 [2024-12-13 23:00:57.258456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.377 [2024-12-13 23:00:57.258489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.377 [2024-12-13 23:00:57.258497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.377 [2024-12-13 23:00:57.258504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.377 [2024-12-13 23:00:57.258541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.377 [2024-12-13 23:00:57.258548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.377 [2024-12-13 23:00:57.258555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.377 [2024-12-13 23:00:57.258561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.377 [2024-12-13 23:00:57.258584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.377 [2024-12-13 23:00:57.258593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.377 [2024-12-13 23:00:57.258599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.377 [2024-12-13 23:00:57.258605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.377 [2024-12-13 23:00:57.258672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.377 [2024-12-13 23:00:57.258680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.377 [2024-12-13 23:00:57.258686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.377 [2024-12-13 23:00:57.258692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.377 [2024-12-13 23:00:57.258715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.377 [2024-12-13 23:00:57.258722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:18.377 [2024-12-13 23:00:57.258730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.377 [2024-12-13 23:00:57.258735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.377 [2024-12-13 23:00:57.258778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.377 [2024-12-13 23:00:57.258785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.377 [2024-12-13 23:00:57.258791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.377 [2024-12-13 23:00:57.258797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:18.377 [2024-12-13 23:00:57.258830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.377 [2024-12-13 23:00:57.258840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.377 [2024-12-13 23:00:57.258846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.377 [2024-12-13 23:00:57.258852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.377 [2024-12-13 23:00:57.258956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 254.301 ms, result 0 00:20:18.945 00:20:18.945 00:20:18.945 23:00:57 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78719 00:20:18.945 23:00:57 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78719 00:20:18.945 23:00:57 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:18.945 23:00:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78719 ']' 00:20:18.945 23:00:57 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.945 23:00:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.945 23:00:57 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.945 23:00:57 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.945 23:00:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:18.945 [2024-12-13 23:00:57.900273] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:18.945 [2024-12-13 23:00:57.900396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78719 ] 00:20:18.945 [2024-12-13 23:00:58.057541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.202 [2024-12-13 23:00:58.144217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.819 23:00:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.819 23:00:58 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:19.819 23:00:58 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:19.819 [2024-12-13 23:00:58.945506] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.819 [2024-12-13 23:00:58.945559] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.094 [2024-12-13 23:00:59.119368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.119423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:20.094 [2024-12-13 23:00:59.119445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.094 [2024-12-13 23:00:59.119458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.122359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.122534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.094 [2024-12-13 23:00:59.122564] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:20:20.094 [2024-12-13 23:00:59.122575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.123313] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:20.094 [2024-12-13 23:00:59.124314] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:20.094 [2024-12-13 23:00:59.124358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.124372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.094 [2024-12-13 23:00:59.124389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:20:20.094 [2024-12-13 23:00:59.124401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.125996] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:20.094 [2024-12-13 23:00:59.139919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.139970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:20.094 [2024-12-13 23:00:59.139990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.928 ms 00:20:20.094 [2024-12-13 23:00:59.140005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.140136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.140157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:20.094 [2024-12-13 23:00:59.140172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:20.094 [2024-12-13 23:00:59.140187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.147476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.147525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.094 [2024-12-13 23:00:59.147540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.211 ms 00:20:20.094 [2024-12-13 23:00:59.147553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.147698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.147717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.094 [2024-12-13 23:00:59.147734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:20.094 [2024-12-13 23:00:59.147751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.147832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.147850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:20.094 [2024-12-13 23:00:59.147864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:20.094 [2024-12-13 23:00:59.147878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.147912] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:20.094 [2024-12-13 23:00:59.151744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.151801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.094 [2024-12-13 23:00:59.151820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.837 ms 00:20:20.094 [2024-12-13 23:00:59.151831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.151931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.151947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:20.094 [2024-12-13 23:00:59.151968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:20.094 [2024-12-13 23:00:59.151979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.152014] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:20.094 [2024-12-13 23:00:59.152044] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:20.094 [2024-12-13 23:00:59.152107] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:20.094 [2024-12-13 23:00:59.152131] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:20.094 [2024-12-13 23:00:59.152281] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:20.094 [2024-12-13 23:00:59.152301] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:20.094 [2024-12-13 23:00:59.152320] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:20.094 [2024-12-13 23:00:59.152336] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:20.094 [2024-12-13 23:00:59.152353] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:20.094 [2024-12-13 23:00:59.152367] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:20.094 [2024-12-13 23:00:59.152382] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:20.094 [2024-12-13 23:00:59.152394] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:20.094 [2024-12-13 23:00:59.152411] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:20.094 [2024-12-13 23:00:59.152424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.152439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:20.094 [2024-12-13 23:00:59.152452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:20:20.094 [2024-12-13 23:00:59.152469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.094 [2024-12-13 23:00:59.152592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.094 [2024-12-13 23:00:59.152608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:20.094 [2024-12-13 23:00:59.152622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:20.094 [2024-12-13 23:00:59.152637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.095 
[2024-12-13 23:00:59.152792] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:20.095 [2024-12-13 23:00:59.152813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:20.095 [2024-12-13 23:00:59.152826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.095 [2024-12-13 23:00:59.152842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.095 [2024-12-13 23:00:59.152861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:20.095 [2024-12-13 23:00:59.152876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:20.095 [2024-12-13 23:00:59.152888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:20.095 [2024-12-13 23:00:59.152913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:20.095 [2024-12-13 23:00:59.152935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:20.095 [2024-12-13 23:00:59.152950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.095 [2024-12-13 23:00:59.152961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:20.095 [2024-12-13 23:00:59.152976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:20.095 [2024-12-13 23:00:59.152987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.095 [2024-12-13 23:00:59.153001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:20.095 [2024-12-13 23:00:59.153013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:20.095 [2024-12-13 23:00:59.153027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:20.095 [2024-12-13 23:00:59.153054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:20.095 [2024-12-13 23:00:59.153073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:20.095 [2024-12-13 23:00:59.153099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.095 [2024-12-13 23:00:59.153124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:20.095 [2024-12-13 23:00:59.153141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.095 [2024-12-13 23:00:59.153165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:20.095 [2024-12-13 23:00:59.153177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.095 [2024-12-13 23:00:59.153204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:20.095 [2024-12-13 23:00:59.153218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.095 [2024-12-13 23:00:59.153243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:20:20.095 [2024-12-13 23:00:59.153255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.095 [2024-12-13 23:00:59.153282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:20.095 [2024-12-13 23:00:59.153297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:20.095 [2024-12-13 23:00:59.153308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.095 [2024-12-13 23:00:59.153322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:20.095 [2024-12-13 23:00:59.153334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:20.095 [2024-12-13 23:00:59.153350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:20.095 [2024-12-13 23:00:59.153376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:20.095 [2024-12-13 23:00:59.153387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153401] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:20.095 [2024-12-13 23:00:59.153414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:20.095 [2024-12-13 23:00:59.153429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.095 [2024-12-13 23:00:59.153441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.095 [2024-12-13 23:00:59.153457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:20.095 [2024-12-13 23:00:59.153469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:20.095 [2024-12-13 23:00:59.153483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:20.095 [2024-12-13 23:00:59.153496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:20.095 [2024-12-13 23:00:59.153512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:20.095 [2024-12-13 23:00:59.153525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:20.095 [2024-12-13 23:00:59.153541] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:20.095 [2024-12-13 23:00:59.153557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.095 [2024-12-13 23:00:59.153580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:20.095 [2024-12-13 23:00:59.153593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:20.095 [2024-12-13 23:00:59.153608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:20.095 [2024-12-13 23:00:59.153620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:20.095 [2024-12-13 23:00:59.153636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:20:20.095 [2024-12-13 23:00:59.153648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:20.095 [2024-12-13 23:00:59.153663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:20.095 [2024-12-13 23:00:59.153676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:20.095 [2024-12-13 23:00:59.153689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:20.095 [2024-12-13 23:00:59.153702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:20.095 [2024-12-13 23:00:59.153717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:20.095 [2024-12-13 23:00:59.153729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:20.095 [2024-12-13 23:00:59.153743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:20.095 [2024-12-13 23:00:59.153769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:20.095 [2024-12-13 23:00:59.153784] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:20.095 [2024-12-13 23:00:59.153798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.095 [2024-12-13 23:00:59.153819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:20.095 [2024-12-13 23:00:59.153832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:20.095 [2024-12-13 23:00:59.153847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:20.095 [2024-12-13 23:00:59.153860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:20.095 [2024-12-13 23:00:59.153876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.095 [2024-12-13 23:00:59.153888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:20.095 [2024-12-13 23:00:59.153904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:20:20.095 [2024-12-13 23:00:59.153918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.095 [2024-12-13 23:00:59.184403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.095 [2024-12-13 23:00:59.184452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.095 [2024-12-13 23:00:59.184473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.392 ms 00:20:20.095 [2024-12-13 23:00:59.184484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.095 
[2024-12-13 23:00:59.184653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.095 [2024-12-13 23:00:59.184676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:20.095 [2024-12-13 23:00:59.184691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:20.095 [2024-12-13 23:00:59.184703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.095 [2024-12-13 23:00:59.219243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.095 [2024-12-13 23:00:59.219289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.095 [2024-12-13 23:00:59.219308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.502 ms 00:20:20.095 [2024-12-13 23:00:59.219319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.095 [2024-12-13 23:00:59.219436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.095 [2024-12-13 23:00:59.219451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.095 [2024-12-13 23:00:59.219467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:20.095 [2024-12-13 23:00:59.219481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.095 [2024-12-13 23:00:59.220099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.095 [2024-12-13 23:00:59.220145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.095 [2024-12-13 23:00:59.220162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:20:20.095 [2024-12-13 23:00:59.220173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.095 [2024-12-13 23:00:59.220381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.095 [2024-12-13 23:00:59.220403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.095 [2024-12-13 23:00:59.220421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:20:20.095 [2024-12-13 23:00:59.220432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.238548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.238599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.357 [2024-12-13 23:00:59.238617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.079 ms 00:20:20.357 [2024-12-13 23:00:59.238628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.271170] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:20.357 [2024-12-13 23:00:59.271368] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:20.357 [2024-12-13 23:00:59.271407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.271420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:20.357 [2024-12-13 23:00:59.271437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.590 ms 00:20:20.357 [2024-12-13 23:00:59.271457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.297885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.298080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:20.357 [2024-12-13 23:00:59.298116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.155 ms 00:20:20.357 [2024-12-13 23:00:59.298132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.320021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.320081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:20.357 [2024-12-13 23:00:59.320102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.461 ms 00:20:20.357 [2024-12-13 23:00:59.320110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.333124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.333321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:20.357 [2024-12-13 23:00:59.333351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.893 ms 00:20:20.357 [2024-12-13 23:00:59.333359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.334047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.334077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:20.357 [2024-12-13 23:00:59.334090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:20:20.357 [2024-12-13 23:00:59.334098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.400131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.400389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:20.357 [2024-12-13 23:00:59.400421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.999 ms 00:20:20.357 [2024-12-13 23:00:59.400431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.412057] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:20.357 [2024-12-13 23:00:59.431515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.431750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:20.357 [2024-12-13 23:00:59.431810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.900 ms 00:20:20.357 [2024-12-13 23:00:59.431822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.431938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.431953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:20.357 [2024-12-13 23:00:59.431962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:20.357 [2024-12-13 23:00:59.431973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.432030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.432041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:20.357 [2024-12-13 23:00:59.432053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:20:20.357 [2024-12-13 23:00:59.432064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.432090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.432104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:20.357 [2024-12-13 23:00:59.432113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:20.357 [2024-12-13 23:00:59.432123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.432159] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:20.357 [2024-12-13 23:00:59.432178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.432187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:20.357 [2024-12-13 23:00:59.432197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:20.357 [2024-12-13 23:00:59.432207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.458546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.458603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:20.357 [2024-12-13 23:00:59.458621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.307 ms 00:20:20.357 [2024-12-13 23:00:59.458629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.458792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.357 [2024-12-13 23:00:59.458805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:20.357 [2024-12-13 23:00:59.458821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:20.357 [2024-12-13 23:00:59.458829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.357 [2024-12-13 23:00:59.459941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:20.357 [2024-12-13 23:00:59.463415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.189 ms, result 0 00:20:20.357 [2024-12-13 23:00:59.465799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:20.357 Some configs were skipped because the RPC state that can call them passed over. 
00:20:20.619 23:00:59 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:20.619 [2024-12-13 23:00:59.731550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.619 [2024-12-13 23:00:59.731910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:20.619 [2024-12-13 23:00:59.732071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.488 ms 00:20:20.619 [2024-12-13 23:00:59.732123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.619 [2024-12-13 23:00:59.732315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 4.252 ms, result 0 00:20:20.619 true 00:20:20.619 23:00:59 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:20.880 [2024-12-13 23:00:59.955410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.880 [2024-12-13 23:00:59.955477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:20.880 [2024-12-13 23:00:59.955496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:20:20.880 [2024-12-13 23:00:59.955506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.880 [2024-12-13 23:00:59.955550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.871 ms, result 0 00:20:20.880 true 00:20:20.880 23:00:59 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78719 00:20:20.880 23:00:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78719 ']' 00:20:20.880 23:00:59 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78719 00:20:20.880 23:00:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:20.880 23:00:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.880 23:00:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78719 00:20:20.880 23:01:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.880 23:01:00 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.880 23:01:00 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78719' 00:20:20.880 killing process with pid 78719 00:20:20.880 23:01:00 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78719 00:20:20.880 23:01:00 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78719 00:20:21.820 [2024-12-13 23:01:00.700048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.700271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:21.820 [2024-12-13 23:01:00.700327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:21.820 [2024-12-13 23:01:00.700350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.700384] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:21.820 [2024-12-13 23:01:00.702455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.702549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:21.820 [2024-12-13 23:01:00.702602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.039 ms 00:20:21.820 [2024-12-13 23:01:00.702619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.702867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.702890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:21.820 [2024-12-13 23:01:00.702906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:20:21.820 [2024-12-13 23:01:00.702956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.706008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.706098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:21.820 [2024-12-13 23:01:00.706111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:20:21.820 [2024-12-13 23:01:00.706118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.711331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.711416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:21.820 [2024-12-13 23:01:00.711465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:20:21.820 [2024-12-13 23:01:00.711483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.718853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.718951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:21.820 [2024-12-13 23:01:00.719001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.304 ms 00:20:21.820 [2024-12-13 23:01:00.719018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.725481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.725571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:21.820 [2024-12-13 23:01:00.725677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.426 ms 00:20:21.820 [2024-12-13 23:01:00.725730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.725855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.725904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:21.820 [2024-12-13 23:01:00.725932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:21.820 [2024-12-13 23:01:00.726020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.733957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.734048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:21.820 [2024-12-13 23:01:00.734094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.898 ms 00:20:21.820 [2024-12-13 23:01:00.734110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.741128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.741210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:21.820 [2024-12-13 
23:01:00.741313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.982 ms 00:20:21.820 [2024-12-13 23:01:00.741337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.748040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.820 [2024-12-13 23:01:00.748121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:21.820 [2024-12-13 23:01:00.748185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.655 ms 00:20:21.820 [2024-12-13 23:01:00.748204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.820 [2024-12-13 23:01:00.755458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.821 [2024-12-13 23:01:00.755549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:21.821 [2024-12-13 23:01:00.755589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.196 ms 00:20:21.821 [2024-12-13 23:01:00.755605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.821 [2024-12-13 23:01:00.755649] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:21.821 [2024-12-13 23:01:00.755673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.755700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.755792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.755824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.755847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.755873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.755942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.755968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.755991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756243] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 
23:01:00.756625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:21.821 [2024-12-13 23:01:00.756801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:21.821 [2024-12-13 23:01:00.756842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.756995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.757001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.757007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.757014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:21.822 [2024-12-13 23:01:00.757030] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:21.822 [2024-12-13 23:01:00.757042] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9143edc-1a7a-4c95-b8e5-8cccde6ada68 00:20:21.822 [2024-12-13 23:01:00.757048] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:21.822 [2024-12-13 23:01:00.757055] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:21.822 [2024-12-13 23:01:00.757060] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:21.822 [2024-12-13 23:01:00.757067] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:21.822 [2024-12-13 23:01:00.757072] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:21.822 [2024-12-13 23:01:00.757079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:21.822 [2024-12-13 23:01:00.757085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:21.822 [2024-12-13 23:01:00.757091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:21.822 [2024-12-13 23:01:00.757096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:21.822 [2024-12-13 23:01:00.757102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.822 [2024-12-13 23:01:00.757108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:21.822 [2024-12-13 23:01:00.757115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:20:21.822 [2024-12-13 23:01:00.757122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.822 [2024-12-13 23:01:00.768104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.822 [2024-12-13 23:01:00.768203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:21.822 [2024-12-13 23:01:00.768252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.960 ms 00:20:21.822 [2024-12-13 23:01:00.768270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.822 [2024-12-13 23:01:00.768572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:21.822 [2024-12-13 23:01:00.768649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:21.822 [2024-12-13 23:01:00.768696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:20:21.822 [2024-12-13 23:01:00.768737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.822 [2024-12-13 23:01:00.821629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.822 [2024-12-13 23:01:00.821768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.822 [2024-12-13 23:01:00.821844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.822 [2024-12-13 23:01:00.821921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.822 [2024-12-13 23:01:00.822045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.822 [2024-12-13 23:01:00.822078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.822 [2024-12-13 23:01:00.822122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.822 [2024-12-13 23:01:00.822143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.822 [2024-12-13 23:01:00.822209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.822 [2024-12-13 23:01:00.822363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.822 [2024-12-13 23:01:00.822400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.822 [2024-12-13 23:01:00.822423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.822 [2024-12-13 23:01:00.822509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.822 [2024-12-13 23:01:00.822540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.822 [2024-12-13 23:01:00.822567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.822 [2024-12-13 23:01:00.822623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.822 [2024-12-13 23:01:00.916892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.822 [2024-12-13 23:01:00.917022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.822 [2024-12-13 23:01:00.917077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.822 [2024-12-13 23:01:00.917101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.083 [2024-12-13 23:01:00.965804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.083 [2024-12-13 23:01:00.965910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.083 [2024-12-13 23:01:00.965955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.083 [2024-12-13 23:01:00.965973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.083 [2024-12-13 23:01:00.966053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.083 [2024-12-13 23:01:00.966102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.083 [2024-12-13 23:01:00.966124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.083 [2024-12-13 23:01:00.966139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:22.083 [2024-12-13 23:01:00.966192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.083 [2024-12-13 23:01:00.966210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.083 [2024-12-13 23:01:00.966227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.083 [2024-12-13 23:01:00.966243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.083 [2024-12-13 23:01:00.966334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.083 [2024-12-13 23:01:00.966401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.083 [2024-12-13 23:01:00.966456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.083 [2024-12-13 23:01:00.966470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.083 [2024-12-13 23:01:00.966508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.083 [2024-12-13 23:01:00.966525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:22.083 [2024-12-13 23:01:00.966542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.083 [2024-12-13 23:01:00.966557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.083 [2024-12-13 23:01:00.966597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.083 [2024-12-13 23:01:00.966660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.083 [2024-12-13 23:01:00.966683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.083 [2024-12-13 23:01:00.966698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.083 [2024-12-13 23:01:00.966745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.083 [2024-12-13 23:01:00.966781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.083 [2024-12-13 23:01:00.966800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.083 [2024-12-13 23:01:00.966816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.083 [2024-12-13 23:01:00.967005] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.936 ms, result 0 00:20:22.652 23:01:01 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:22.652 [2024-12-13 23:01:01.546409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:20:22.652 [2024-12-13 23:01:01.546706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78772 ] 00:20:22.652 [2024-12-13 23:01:01.701968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.652 [2024-12-13 23:01:01.785667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.910 [2024-12-13 23:01:02.032903] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.910 [2024-12-13 23:01:02.033096] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.172 [2024-12-13 23:01:02.193185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.193357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:23.172 [2024-12-13 23:01:02.193430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:23.172 [2024-12-13 23:01:02.193455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.196250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.196386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:23.172 [2024-12-13 23:01:02.196446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.758 ms 00:20:23.172 [2024-12-13 23:01:02.196469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.196655] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:23.172 [2024-12-13 23:01:02.197524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:23.172 [2024-12-13 23:01:02.197552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.197560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:23.172 [2024-12-13 23:01:02.197570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:20:23.172 [2024-12-13 23:01:02.197577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.198909] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:23.172 [2024-12-13 23:01:02.212384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.212425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:23.172 [2024-12-13 23:01:02.212437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.476 ms 00:20:23.172 [2024-12-13 23:01:02.212444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.212550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.212562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:23.172 [2024-12-13 23:01:02.212571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:23.172 [2024-12-13 23:01:02.212579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.219261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:23.172 [2024-12-13 23:01:02.219297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:23.172 [2024-12-13 23:01:02.219307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.636 ms 00:20:23.172 [2024-12-13 23:01:02.219314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.219411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.219421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:23.172 [2024-12-13 23:01:02.219429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:23.172 [2024-12-13 23:01:02.219437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.219464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.219472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:23.172 [2024-12-13 23:01:02.219480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:23.172 [2024-12-13 23:01:02.219488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.219507] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:23.172 [2024-12-13 23:01:02.223174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.223206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:23.172 [2024-12-13 23:01:02.223216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:20:23.172 [2024-12-13 23:01:02.223223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.223280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.223289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:23.172 [2024-12-13 23:01:02.223298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:23.172 [2024-12-13 23:01:02.223305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.172 [2024-12-13 23:01:02.223327] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:23.172 [2024-12-13 23:01:02.223349] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:23.172 [2024-12-13 23:01:02.223384] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:23.172 [2024-12-13 23:01:02.223399] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:23.172 [2024-12-13 23:01:02.223503] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:23.172 [2024-12-13 23:01:02.223515] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:23.172 [2024-12-13 23:01:02.223526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:23.172 [2024-12-13 23:01:02.223540] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:23.172 [2024-12-13 23:01:02.223548] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:23.172 [2024-12-13 23:01:02.223557] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:23.172 [2024-12-13 23:01:02.223564] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:23.172 [2024-12-13 23:01:02.223571] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:23.172 [2024-12-13 23:01:02.223578] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:23.172 [2024-12-13 23:01:02.223586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.172 [2024-12-13 23:01:02.223594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:23.172 [2024-12-13 23:01:02.223603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:20:23.172 [2024-12-13 23:01:02.223610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.173 [2024-12-13 23:01:02.223712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.173 [2024-12-13 23:01:02.223725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:23.173 [2024-12-13 23:01:02.223733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:23.173 [2024-12-13 23:01:02.223741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.173 [2024-12-13 23:01:02.223869] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:23.173 [2024-12-13 23:01:02.223881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:23.173 [2024-12-13 23:01:02.223889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.173 [2024-12-13 23:01:02.223897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.173 [2024-12-13 23:01:02.223905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:23.173 [2024-12-13 23:01:02.223912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:23.173 [2024-12-13 23:01:02.223919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:23.173 [2024-12-13 23:01:02.223927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:23.173 [2024-12-13 23:01:02.223934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:23.173 [2024-12-13 23:01:02.223941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.173 [2024-12-13 23:01:02.223948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:23.173 [2024-12-13 23:01:02.223961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:23.173 [2024-12-13 23:01:02.223968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.173 [2024-12-13 23:01:02.223976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:23.173 [2024-12-13 23:01:02.223983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:23.173 [2024-12-13 23:01:02.223990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.173 [2024-12-13 23:01:02.223996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:23.173 [2024-12-13 23:01:02.224003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:23.173 [2024-12-13 23:01:02.224009] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.173 [2024-12-13 23:01:02.224016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:23.173 [2024-12-13 23:01:02.224022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:23.173 [2024-12-13 23:01:02.224029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.173 [2024-12-13 23:01:02.224036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:23.173 [2024-12-13 23:01:02.224043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:23.173 [2024-12-13 23:01:02.224050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.173 [2024-12-13 23:01:02.224056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:23.173 [2024-12-13 23:01:02.224063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:23.173 [2024-12-13 23:01:02.224070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.173 [2024-12-13 23:01:02.224076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:23.173 [2024-12-13 23:01:02.224083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:23.173 [2024-12-13 23:01:02.224090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.173 [2024-12-13 23:01:02.224097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:23.173 [2024-12-13 23:01:02.224103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:23.173 [2024-12-13 23:01:02.224110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.173 [2024-12-13 23:01:02.224116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:23.173 [2024-12-13 23:01:02.224123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:23.173 [2024-12-13 23:01:02.224129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.173 [2024-12-13 23:01:02.224136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:23.173 [2024-12-13 23:01:02.224143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:23.173 [2024-12-13 23:01:02.224150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.173 [2024-12-13 23:01:02.224156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:23.173 [2024-12-13 23:01:02.224163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:23.173 [2024-12-13 23:01:02.224170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.173 [2024-12-13 23:01:02.224176] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:23.173 [2024-12-13 23:01:02.224184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:23.173 [2024-12-13 23:01:02.224197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.173 [2024-12-13 23:01:02.224204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.173 [2024-12-13 23:01:02.224211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:23.173 [2024-12-13 23:01:02.224218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:23.173 [2024-12-13 23:01:02.224225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:23.173 
[2024-12-13 23:01:02.224231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:23.173 [2024-12-13 23:01:02.224239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:23.173 [2024-12-13 23:01:02.224246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:23.173 [2024-12-13 23:01:02.224255] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:23.173 [2024-12-13 23:01:02.224264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.173 [2024-12-13 23:01:02.224272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:23.173 [2024-12-13 23:01:02.224279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:23.173 [2024-12-13 23:01:02.224286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:23.173 [2024-12-13 23:01:02.224294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:23.173 [2024-12-13 23:01:02.224300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:23.173 [2024-12-13 23:01:02.224307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:23.173 [2024-12-13 23:01:02.224314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:23.173 [2024-12-13 23:01:02.224322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:23.173 [2024-12-13 23:01:02.224329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:23.173 [2024-12-13 23:01:02.224336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:23.173 [2024-12-13 23:01:02.224343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:23.173 [2024-12-13 23:01:02.224350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:23.173 [2024-12-13 23:01:02.224357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:23.173 [2024-12-13 23:01:02.224365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:23.173 [2024-12-13 23:01:02.224371] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:23.173 [2024-12-13 23:01:02.224379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.173 [2024-12-13 23:01:02.224388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:23.173 [2024-12-13 23:01:02.224396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:23.173 [2024-12-13 23:01:02.224404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:23.173 [2024-12-13 23:01:02.224411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:23.173 [2024-12-13 23:01:02.224418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.173 [2024-12-13 23:01:02.224427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:23.173 [2024-12-13 23:01:02.224435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:20:23.173 [2024-12-13 23:01:02.224441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.173 [2024-12-13 23:01:02.254329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.173 [2024-12-13 23:01:02.254517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:23.173 [2024-12-13 23:01:02.254536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.834 ms 00:20:23.173 [2024-12-13 23:01:02.254545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.173 [2024-12-13 23:01:02.254684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.173 [2024-12-13 23:01:02.254694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:23.173 [2024-12-13 23:01:02.254705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:23.173 [2024-12-13 23:01:02.254712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.173 [2024-12-13 23:01:02.297118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.173 [2024-12-13 23:01:02.297324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:23.173 [2024-12-13 23:01:02.297352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.382 ms 00:20:23.173 [2024-12-13 23:01:02.297362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.173 [2024-12-13 23:01:02.297483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.173 [2024-12-13 23:01:02.297496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:23.173 [2024-12-13 23:01:02.297506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:23.173 [2024-12-13 23:01:02.297515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.173 [2024-12-13 23:01:02.298074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.173 [2024-12-13 23:01:02.298113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:23.173 [2024-12-13 23:01:02.298125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:20:23.174 [2024-12-13 23:01:02.298139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.174 [2024-12-13 23:01:02.298296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.174 [2024-12-13 23:01:02.298307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:23.174 [2024-12-13 23:01:02.298315] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:23.174 [2024-12-13 23:01:02.298323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.442 [2024-12-13 23:01:02.314825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.442 [2024-12-13 23:01:02.314869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.442 [2024-12-13 23:01:02.314880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.478 ms 00:20:23.442 [2024-12-13 23:01:02.314889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.442 [2024-12-13 23:01:02.329146] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:23.442 [2024-12-13 23:01:02.329194] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:23.442 [2024-12-13 23:01:02.329207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.442 [2024-12-13 23:01:02.329216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:23.442 [2024-12-13 23:01:02.329226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.198 ms 00:20:23.442 [2024-12-13 23:01:02.329233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.354917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.354970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:23.443 [2024-12-13 23:01:02.354983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.585 ms 00:20:23.443 [2024-12-13 23:01:02.354992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.368006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.368061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:23.443 [2024-12-13 23:01:02.368073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.916 ms 00:20:23.443 [2024-12-13 23:01:02.368081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.382492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.382550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:23.443 [2024-12-13 23:01:02.382564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.742 ms 00:20:23.443 [2024-12-13 23:01:02.382573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.383255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.383288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:23.443 [2024-12-13 23:01:02.383300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:23.443 [2024-12-13 23:01:02.383309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.447643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.447710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:23.443 [2024-12-13 23:01:02.447726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.306 ms 00:20:23.443 [2024-12-13 23:01:02.447735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.458879] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:23.443 [2024-12-13 23:01:02.477777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.477827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:23.443 [2024-12-13 23:01:02.477839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.904 ms 00:20:23.443 [2024-12-13 23:01:02.477854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.477945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.477957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:23.443 [2024-12-13 23:01:02.477968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:23.443 [2024-12-13 23:01:02.477977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.478041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.478053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:23.443 [2024-12-13 23:01:02.478062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:23.443 [2024-12-13 23:01:02.478073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.478105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.478114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:23.443 [2024-12-13 23:01:02.478123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:23.443 [2024-12-13 23:01:02.478131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.478170] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:23.443 [2024-12-13 23:01:02.478181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.478190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:23.443 [2024-12-13 23:01:02.478199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:23.443 [2024-12-13 23:01:02.478207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.504236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.504304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:23.443 [2024-12-13 23:01:02.504318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.007 ms 00:20:23.443 [2024-12-13 23:01:02.504326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.443 [2024-12-13 23:01:02.504483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.443 [2024-12-13 23:01:02.504497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:23.443 [2024-12-13 23:01:02.504507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:23.443 [2024-12-13 23:01:02.504515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:23.443 [2024-12-13 23:01:02.505688] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.443 [2024-12-13 23:01:02.509300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.174 ms, result 0 00:20:23.443 [2024-12-13 23:01:02.510736] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:23.443 [2024-12-13 23:01:02.524355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.832  [2024-12-13T23:01:04.924Z] Copying: 17/256 [MB] (17 MBps) [2024-12-13T23:01:05.861Z] Copying: 28/256 [MB] (11 MBps) [2024-12-13T23:01:06.807Z] Copying: 62/256 [MB] (33 MBps) [2024-12-13T23:01:07.760Z] Copying: 77/256 [MB] (15 MBps) [2024-12-13T23:01:08.707Z] Copying: 89/256 [MB] (12 MBps) [2024-12-13T23:01:09.653Z] Copying: 111/256 [MB] (22 MBps) [2024-12-13T23:01:10.597Z] Copying: 128/256 [MB] (16 MBps) [2024-12-13T23:01:11.980Z] Copying: 148/256 [MB] (20 MBps) [2024-12-13T23:01:12.925Z] Copying: 166/256 [MB] (17 MBps) [2024-12-13T23:01:13.871Z] Copying: 176/256 [MB] (10 MBps) [2024-12-13T23:01:14.816Z] Copying: 186/256 [MB] (10 MBps) [2024-12-13T23:01:15.761Z] Copying: 196/256 [MB] (10 MBps) [2024-12-13T23:01:16.711Z] Copying: 206/256 [MB] (10 MBps) [2024-12-13T23:01:17.657Z] Copying: 221/256 [MB] (14 MBps) [2024-12-13T23:01:18.601Z] Copying: 240/256 [MB] (18 MBps) [2024-12-13T23:01:18.861Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-13 23:01:18.799702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.721 [2024-12-13 23:01:18.811060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.721 [2024-12-13 23:01:18.811114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:39.721 [2024-12-13 23:01:18.811136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.721 [2024-12-13 23:01:18.811146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.721 [2024-12-13 23:01:18.811175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:39.721 [2024-12-13 23:01:18.814127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.721 [2024-12-13 23:01:18.814169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:39.721 [2024-12-13 23:01:18.814183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:20:39.722 [2024-12-13 23:01:18.814192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.722 [2024-12-13 23:01:18.814491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.722 [2024-12-13 23:01:18.814502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:39.722 [2024-12-13 23:01:18.814513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:39.722 [2024-12-13 23:01:18.814522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.722 [2024-12-13 23:01:18.818574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.722 [2024-12-13 23:01:18.818603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:39.722 [2024-12-13 23:01:18.818614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
4.029 ms 00:20:39.722 [2024-12-13 23:01:18.818622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.722 [2024-12-13 23:01:18.826148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.722 [2024-12-13 23:01:18.826190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:39.722 [2024-12-13 23:01:18.826202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.503 ms 00:20:39.722 [2024-12-13 23:01:18.826211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.722 [2024-12-13 23:01:18.854790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.722 [2024-12-13 23:01:18.854845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.722 [2024-12-13 23:01:18.854858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.501 ms 00:20:39.722 [2024-12-13 23:01:18.854867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.982 [2024-12-13 23:01:18.871892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.982 [2024-12-13 23:01:18.871942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.982 [2024-12-13 23:01:18.871963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.947 ms 00:20:39.982 [2024-12-13 23:01:18.871973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.982 [2024-12-13 23:01:18.872147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.982 [2024-12-13 23:01:18.872160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.982 [2024-12-13 23:01:18.872179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:39.982 [2024-12-13 23:01:18.872188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.982 [2024-12-13 23:01:18.898181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.982 [2024-12-13 23:01:18.898232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:39.982 [2024-12-13 23:01:18.898245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.975 ms 00:20:39.982 [2024-12-13 23:01:18.898253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.982 [2024-12-13 23:01:18.923677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.982 [2024-12-13 23:01:18.923724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:39.982 [2024-12-13 23:01:18.923736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.356 ms 00:20:39.982 [2024-12-13 23:01:18.923744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.982 [2024-12-13 23:01:18.948560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.982 [2024-12-13 23:01:18.948605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.982 [2024-12-13 23:01:18.948618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.727 ms 00:20:39.982 [2024-12-13 23:01:18.948626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.982 [2024-12-13 23:01:18.973037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.983 [2024-12-13 23:01:18.973085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:39.983 [2024-12-13 
23:01:18.973097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.325 ms 00:20:39.983 [2024-12-13 23:01:18.973105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.983 [2024-12-13 23:01:18.973155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:39.983 [2024-12-13 23:01:18.973173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973536] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973746] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:39.983 [2024-12-13 23:01:18.973871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 
23:01:18.973975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.973993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:39.984 [2024-12-13 23:01:18.974010] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.984 [2024-12-13 23:01:18.974019] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9143edc-1a7a-4c95-b8e5-8cccde6ada68 00:20:39.984 [2024-12-13 23:01:18.974029] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:39.984 [2024-12-13 23:01:18.974037] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:39.984 [2024-12-13 23:01:18.974044] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:39.984 [2024-12-13 23:01:18.974053] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:39.984 [2024-12-13 23:01:18.974061] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.984 [2024-12-13 23:01:18.974070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.984 [2024-12-13 23:01:18.974081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.984 [2024-12-13 23:01:18.974088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.984 [2024-12-13 23:01:18.974095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:39.984 [2024-12-13 23:01:18.974102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.984 [2024-12-13 23:01:18.974110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.984 [2024-12-13 23:01:18.974119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:20:39.984 [2024-12-13 23:01:18.974128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.984 [2024-12-13 23:01:18.987751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.984 [2024-12-13 23:01:18.987870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.984 [2024-12-13 23:01:18.987883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.589 ms 00:20:39.984 [2024-12-13 23:01:18.987891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.984 [2024-12-13 23:01:18.988305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.984 [2024-12-13 23:01:18.988327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.984 [2024-12-13 23:01:18.988338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:20:39.984 [2024-12-13 23:01:18.988345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.984 [2024-12-13 23:01:19.027375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.984 [2024-12-13 23:01:19.027425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.984 [2024-12-13 23:01:19.027436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.984 [2024-12-13 23:01:19.027451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.984 [2024-12-13 23:01:19.027560] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:39.984 [2024-12-13 23:01:19.027571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.984 [2024-12-13 23:01:19.027582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.984 [2024-12-13 23:01:19.027590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.984 [2024-12-13 23:01:19.027642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.984 [2024-12-13 23:01:19.027652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.984 [2024-12-13 23:01:19.027660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.984 [2024-12-13 23:01:19.027668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.984 [2024-12-13 23:01:19.027690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.984 [2024-12-13 23:01:19.027699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.984 [2024-12-13 23:01:19.027707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.984 [2024-12-13 23:01:19.027714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.984 [2024-12-13 23:01:19.111574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.984 [2024-12-13 23:01:19.111634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.984 [2024-12-13 23:01:19.111648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.984 [2024-12-13 23:01:19.111656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.244 [2024-12-13 23:01:19.181816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.244 [2024-12-13 23:01:19.181874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.244 [2024-12-13 23:01:19.181887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.244 [2024-12-13 23:01:19.181896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.245 [2024-12-13 23:01:19.181975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.245 [2024-12-13 23:01:19.181985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.245 [2024-12-13 23:01:19.181994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.245 [2024-12-13 23:01:19.182003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.245 [2024-12-13 23:01:19.182036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.245 [2024-12-13 23:01:19.182054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.245 [2024-12-13 23:01:19.182063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.245 [2024-12-13 23:01:19.182071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.245 [2024-12-13 23:01:19.182170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.245 [2024-12-13 23:01:19.182181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.245 [2024-12-13 23:01:19.182189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.245 [2024-12-13 23:01:19.182197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:40.245 [2024-12-13 23:01:19.182232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.245 [2024-12-13 23:01:19.182241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:40.245 [2024-12-13 23:01:19.182253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.245 [2024-12-13 23:01:19.182261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.245 [2024-12-13 23:01:19.182306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.245 [2024-12-13 23:01:19.182315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.245 [2024-12-13 23:01:19.182324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.245 [2024-12-13 23:01:19.182333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.245 [2024-12-13 23:01:19.182379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.245 [2024-12-13 23:01:19.182393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.245 [2024-12-13 23:01:19.182401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.245 [2024-12-13 23:01:19.182409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.245 [2024-12-13 23:01:19.182569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.519 ms, result 0 00:20:40.816 00:20:40.816 00:20:41.076 23:01:19 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:41.693 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:41.693 23:01:20 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:41.693 23:01:20 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:41.693 23:01:20 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:41.693 23:01:20 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:41.693 23:01:20 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:41.693 23:01:20 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:41.693 23:01:20 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78719 00:20:41.693 23:01:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78719 ']' 00:20:41.693 23:01:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78719 00:20:41.693 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78719) - No such process 00:20:41.693 Process with pid 78719 is not found 00:20:41.693 23:01:20 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78719 is not found' 00:20:41.693 00:20:41.693 real 1m22.885s 00:20:41.693 user 1m38.891s 00:20:41.693 sys 0m16.294s 00:20:41.693 23:01:20 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.693 23:01:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:41.693 ************************************ 00:20:41.693 END TEST ftl_trim 00:20:41.693 ************************************ 00:20:41.694 23:01:20 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:41.694 23:01:20 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:41.694 23:01:20 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.694 23:01:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:41.694 ************************************ 00:20:41.694 START TEST ftl_restore 00:20:41.694 ************************************ 00:20:41.694 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:41.694 * Looking for test storage... 00:20:41.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.694 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:41.694 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:41.694 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.980 23:01:20 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:41.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.980 --rc genhtml_branch_coverage=1 00:20:41.980 --rc genhtml_function_coverage=1 00:20:41.980 --rc genhtml_legend=1 00:20:41.980 --rc geninfo_all_blocks=1 00:20:41.980 --rc geninfo_unexecuted_blocks=1 00:20:41.980 00:20:41.980 ' 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:41.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.980 --rc genhtml_branch_coverage=1 00:20:41.980 --rc genhtml_function_coverage=1 00:20:41.980 --rc genhtml_legend=1 00:20:41.980 --rc geninfo_all_blocks=1 00:20:41.980 --rc geninfo_unexecuted_blocks=1 00:20:41.980 00:20:41.980 ' 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:41.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.980 --rc genhtml_branch_coverage=1 00:20:41.980 --rc genhtml_function_coverage=1 00:20:41.980 --rc genhtml_legend=1 00:20:41.980 --rc geninfo_all_blocks=1 00:20:41.980 --rc geninfo_unexecuted_blocks=1 00:20:41.980 00:20:41.980 ' 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:41.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.980 --rc genhtml_branch_coverage=1 00:20:41.980 --rc genhtml_function_coverage=1 00:20:41.980 --rc genhtml_legend=1 00:20:41.980 --rc geninfo_all_blocks=1 00:20:41.980 --rc geninfo_unexecuted_blocks=1 00:20:41.980 00:20:41.980 ' 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
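The version gymnastics above come from scripts/common.sh picking which lcov coverage flags to export: both version strings are split on '.' and '-' and compared field by field as integers. The snippet below is a stripped-down illustration of that comparison pattern, not the real common.sh implementation.

    cmp_versions() {                      # usage: cmp_versions 1.15 '<' 2
      local IFS=.- op=$2 i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$3"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} > ${v2[i]:-0})) && { [[ $op == *'>'* ]]; return; }
        ((${v1[i]:-0} < ${v2[i]:-0})) && { [[ $op == *'<'* ]]; return; }
      done
      [[ $op == *'='* ]]                  # equal versions only satisfy <=, >=, ==
    }
    cmp_versions 1.15 '<' 2 && echo yes   # the comparison exercised in the trace above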
00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:41.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.uNIEGFPe6z 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79036 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79036 00:20:41.980 23:01:20 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79036 ']' 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.980 23:01:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:41.980 [2024-12-13 23:01:20.980300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
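The ftl_restore invocation restore.sh -c 0000:00:10.0 0000:00:11.0 is unpacked right at the top of the trace: getopts consumes -c as the NV cache PCIe address, the option pair is shifted away, the remaining positional argument becomes the base device, and the RPC timeout is pinned at 240 s. A minimal sketch of that option handling, covering only the path exercised in this run (the optstring also advertises -u and -f):

    while getopts ":u:c:f" opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;            # -c 0000:00:10.0 -> NV cache / write-buffer device
        *) ;;                             # -u / -f not used in this run
      esac
    done
    shift $((OPTIND - 1))                 # the trace shows "shift 2": the option plus its argument
    device=$1                             # 0000:00:11.0 -> base NVMe device
    timeout=240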
00:20:41.980 [2024-12-13 23:01:20.980452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79036 ] 00:20:42.241 [2024-12-13 23:01:21.139918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.241 [2024-12-13 23:01:21.258404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.187 23:01:21 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.187 23:01:21 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:43.187 23:01:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:43.187 23:01:21 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:43.187 23:01:21 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:43.187 23:01:21 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:43.187 23:01:21 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:43.187 23:01:21 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:43.187 23:01:22 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:43.187 23:01:22 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:43.187 23:01:22 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:43.187 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:43.187 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:43.187 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:43.187 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:43.187 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:43.449 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:43.449 { 00:20:43.449 "name": "nvme0n1", 00:20:43.449 "aliases": [ 00:20:43.449 "11903020-4d3c-4b1e-89bb-2dd8dcc227df" 00:20:43.449 ], 00:20:43.449 "product_name": "NVMe disk", 00:20:43.449 "block_size": 4096, 00:20:43.449 "num_blocks": 1310720, 00:20:43.449 "uuid": "11903020-4d3c-4b1e-89bb-2dd8dcc227df", 00:20:43.449 "numa_id": -1, 00:20:43.449 "assigned_rate_limits": { 00:20:43.449 "rw_ios_per_sec": 0, 00:20:43.449 "rw_mbytes_per_sec": 0, 00:20:43.449 "r_mbytes_per_sec": 0, 00:20:43.449 "w_mbytes_per_sec": 0 00:20:43.449 }, 00:20:43.449 "claimed": true, 00:20:43.449 "claim_type": "read_many_write_one", 00:20:43.449 "zoned": false, 00:20:43.449 "supported_io_types": { 00:20:43.449 "read": true, 00:20:43.449 "write": true, 00:20:43.449 "unmap": true, 00:20:43.449 "flush": true, 00:20:43.449 "reset": true, 00:20:43.449 "nvme_admin": true, 00:20:43.449 "nvme_io": true, 00:20:43.449 "nvme_io_md": false, 00:20:43.449 "write_zeroes": true, 00:20:43.449 "zcopy": false, 00:20:43.449 "get_zone_info": false, 00:20:43.449 "zone_management": false, 00:20:43.449 "zone_append": false, 00:20:43.449 "compare": true, 00:20:43.449 "compare_and_write": false, 00:20:43.449 "abort": true, 00:20:43.449 "seek_hole": false, 00:20:43.449 "seek_data": false, 00:20:43.449 "copy": true, 00:20:43.449 "nvme_iov_md": false 00:20:43.449 }, 00:20:43.449 "driver_specific": { 00:20:43.449 "nvme": [ 
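By the end of the EAL and reactor messages the target is live: restore.sh launched $spdk_tgt_bin in the background, stashed the pid as svcpid (79036 here), and waitforlisten blocked until the RPC socket answered before the first bdev RPC was issued. A rough standalone version of that bring-up follows; the polling loop is a simplification of waitforlisten and the cleanup trap is reduced to a plain kill instead of restore_kill.

    "$spdk_tgt_bin" &                                        # build/bin/spdk_tgt, exported by common.sh
    svcpid=$!
    trap 'kill "$svcpid"' SIGINT SIGTERM EXIT
    until "$rpc_py" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; do
      sleep 0.5                                              # crude stand-in for waitforlisten
    done
    "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0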
00:20:43.449 { 00:20:43.449 "pci_address": "0000:00:11.0", 00:20:43.449 "trid": { 00:20:43.449 "trtype": "PCIe", 00:20:43.449 "traddr": "0000:00:11.0" 00:20:43.449 }, 00:20:43.449 "ctrlr_data": { 00:20:43.449 "cntlid": 0, 00:20:43.449 "vendor_id": "0x1b36", 00:20:43.449 "model_number": "QEMU NVMe Ctrl", 00:20:43.449 "serial_number": "12341", 00:20:43.449 "firmware_revision": "8.0.0", 00:20:43.449 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:43.449 "oacs": { 00:20:43.449 "security": 0, 00:20:43.449 "format": 1, 00:20:43.449 "firmware": 0, 00:20:43.449 "ns_manage": 1 00:20:43.449 }, 00:20:43.449 "multi_ctrlr": false, 00:20:43.449 "ana_reporting": false 00:20:43.449 }, 00:20:43.449 "vs": { 00:20:43.449 "nvme_version": "1.4" 00:20:43.449 }, 00:20:43.449 "ns_data": { 00:20:43.449 "id": 1, 00:20:43.449 "can_share": false 00:20:43.449 } 00:20:43.449 } 00:20:43.449 ], 00:20:43.449 "mp_policy": "active_passive" 00:20:43.449 } 00:20:43.449 } 00:20:43.449 ]' 00:20:43.449 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:43.449 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:43.449 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:43.449 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:43.449 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:43.449 23:01:22 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:43.449 23:01:22 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:43.449 23:01:22 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:43.449 23:01:22 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:43.449 23:01:22 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:43.449 23:01:22 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:43.711 23:01:22 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=54bccaf5-534b-449d-ab48-d8ad67865a06 00:20:43.711 23:01:22 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:43.711 23:01:22 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54bccaf5-534b-449d-ab48-d8ad67865a06 00:20:43.972 23:01:22 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:44.233 23:01:23 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=4939122a-ea42-44d5-a44f-7465ef975327 00:20:44.233 23:01:23 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4939122a-ea42-44d5-a44f-7465ef975327 00:20:44.495 23:01:23 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:44.495 23:01:23 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:44.495 23:01:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:44.495 23:01:23 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:44.495 23:01:23 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:44.495 23:01:23 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:44.495 23:01:23 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:44.495 23:01:23 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:44.495 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:44.495 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:44.495 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:44.495 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:44.495 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:44.756 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:44.756 { 00:20:44.756 "name": "50db3a03-e3ce-43ba-92e5-e2533fc7d65e", 00:20:44.756 "aliases": [ 00:20:44.756 "lvs/nvme0n1p0" 00:20:44.756 ], 00:20:44.756 "product_name": "Logical Volume", 00:20:44.756 "block_size": 4096, 00:20:44.756 "num_blocks": 26476544, 00:20:44.756 "uuid": "50db3a03-e3ce-43ba-92e5-e2533fc7d65e", 00:20:44.756 "assigned_rate_limits": { 00:20:44.756 "rw_ios_per_sec": 0, 00:20:44.756 "rw_mbytes_per_sec": 0, 00:20:44.756 "r_mbytes_per_sec": 0, 00:20:44.756 "w_mbytes_per_sec": 0 00:20:44.756 }, 00:20:44.756 "claimed": false, 00:20:44.756 "zoned": false, 00:20:44.756 "supported_io_types": { 00:20:44.756 "read": true, 00:20:44.756 "write": true, 00:20:44.756 "unmap": true, 00:20:44.756 "flush": false, 00:20:44.756 "reset": true, 00:20:44.756 "nvme_admin": false, 00:20:44.756 "nvme_io": false, 00:20:44.756 "nvme_io_md": false, 00:20:44.756 "write_zeroes": true, 00:20:44.756 "zcopy": false, 00:20:44.756 "get_zone_info": false, 00:20:44.756 "zone_management": false, 00:20:44.756 "zone_append": false, 00:20:44.756 "compare": false, 00:20:44.756 "compare_and_write": false, 00:20:44.756 "abort": false, 00:20:44.757 "seek_hole": true, 00:20:44.757 "seek_data": true, 00:20:44.757 "copy": false, 00:20:44.757 "nvme_iov_md": false 00:20:44.757 }, 00:20:44.757 "driver_specific": { 00:20:44.757 "lvol": { 00:20:44.757 "lvol_store_uuid": "4939122a-ea42-44d5-a44f-7465ef975327", 00:20:44.757 "base_bdev": "nvme0n1", 00:20:44.757 "thin_provision": true, 00:20:44.757 "num_allocated_clusters": 0, 00:20:44.757 "snapshot": false, 00:20:44.757 "clone": false, 00:20:44.757 "esnap_clone": false 00:20:44.757 } 00:20:44.757 } 00:20:44.757 } 00:20:44.757 ]' 00:20:44.757 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:44.757 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:44.757 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:44.757 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:44.757 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:44.757 23:01:23 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:44.757 23:01:23 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:44.757 23:01:23 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:44.757 23:01:23 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:45.016 23:01:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:45.016 23:01:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:45.016 23:01:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:45.016 23:01:24 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:45.016 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:45.016 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:45.016 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:45.016 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:45.275 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:45.275 { 00:20:45.275 "name": "50db3a03-e3ce-43ba-92e5-e2533fc7d65e", 00:20:45.275 "aliases": [ 00:20:45.275 "lvs/nvme0n1p0" 00:20:45.275 ], 00:20:45.275 "product_name": "Logical Volume", 00:20:45.275 "block_size": 4096, 00:20:45.275 "num_blocks": 26476544, 00:20:45.275 "uuid": "50db3a03-e3ce-43ba-92e5-e2533fc7d65e", 00:20:45.275 "assigned_rate_limits": { 00:20:45.275 "rw_ios_per_sec": 0, 00:20:45.275 "rw_mbytes_per_sec": 0, 00:20:45.275 "r_mbytes_per_sec": 0, 00:20:45.275 "w_mbytes_per_sec": 0 00:20:45.275 }, 00:20:45.275 "claimed": false, 00:20:45.275 "zoned": false, 00:20:45.275 "supported_io_types": { 00:20:45.275 "read": true, 00:20:45.275 "write": true, 00:20:45.275 "unmap": true, 00:20:45.275 "flush": false, 00:20:45.275 "reset": true, 00:20:45.275 "nvme_admin": false, 00:20:45.275 "nvme_io": false, 00:20:45.275 "nvme_io_md": false, 00:20:45.275 "write_zeroes": true, 00:20:45.275 "zcopy": false, 00:20:45.275 "get_zone_info": false, 00:20:45.275 "zone_management": false, 00:20:45.275 "zone_append": false, 00:20:45.275 "compare": false, 00:20:45.275 "compare_and_write": false, 00:20:45.275 "abort": false, 00:20:45.275 "seek_hole": true, 00:20:45.275 "seek_data": true, 00:20:45.275 "copy": false, 00:20:45.275 "nvme_iov_md": false 00:20:45.275 }, 00:20:45.275 "driver_specific": { 00:20:45.275 "lvol": { 00:20:45.275 "lvol_store_uuid": "4939122a-ea42-44d5-a44f-7465ef975327", 00:20:45.275 "base_bdev": "nvme0n1", 00:20:45.275 "thin_provision": true, 00:20:45.275 "num_allocated_clusters": 0, 00:20:45.275 "snapshot": false, 00:20:45.275 "clone": false, 00:20:45.275 "esnap_clone": false 00:20:45.275 } 00:20:45.275 } 00:20:45.275 } 00:20:45.275 ]' 00:20:45.275 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:45.275 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:45.275 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:45.275 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:45.275 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:45.275 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:45.275 23:01:24 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:45.275 23:01:24 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:45.533 23:01:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:45.533 23:01:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:45.533 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:45.533 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:45.533 23:01:24 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:20:45.533 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:45.533 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50db3a03-e3ce-43ba-92e5-e2533fc7d65e 00:20:45.791 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:45.792 { 00:20:45.792 "name": "50db3a03-e3ce-43ba-92e5-e2533fc7d65e", 00:20:45.792 "aliases": [ 00:20:45.792 "lvs/nvme0n1p0" 00:20:45.792 ], 00:20:45.792 "product_name": "Logical Volume", 00:20:45.792 "block_size": 4096, 00:20:45.792 "num_blocks": 26476544, 00:20:45.792 "uuid": "50db3a03-e3ce-43ba-92e5-e2533fc7d65e", 00:20:45.792 "assigned_rate_limits": { 00:20:45.792 "rw_ios_per_sec": 0, 00:20:45.792 "rw_mbytes_per_sec": 0, 00:20:45.792 "r_mbytes_per_sec": 0, 00:20:45.792 "w_mbytes_per_sec": 0 00:20:45.792 }, 00:20:45.792 "claimed": false, 00:20:45.792 "zoned": false, 00:20:45.792 "supported_io_types": { 00:20:45.792 "read": true, 00:20:45.792 "write": true, 00:20:45.792 "unmap": true, 00:20:45.792 "flush": false, 00:20:45.792 "reset": true, 00:20:45.792 "nvme_admin": false, 00:20:45.792 "nvme_io": false, 00:20:45.792 "nvme_io_md": false, 00:20:45.792 "write_zeroes": true, 00:20:45.792 "zcopy": false, 00:20:45.792 "get_zone_info": false, 00:20:45.792 "zone_management": false, 00:20:45.792 "zone_append": false, 00:20:45.792 "compare": false, 00:20:45.792 "compare_and_write": false, 00:20:45.792 "abort": false, 00:20:45.792 "seek_hole": true, 00:20:45.792 "seek_data": true, 00:20:45.792 "copy": false, 00:20:45.792 "nvme_iov_md": false 00:20:45.792 }, 00:20:45.792 "driver_specific": { 00:20:45.792 "lvol": { 00:20:45.792 "lvol_store_uuid": "4939122a-ea42-44d5-a44f-7465ef975327", 00:20:45.792 "base_bdev": "nvme0n1", 00:20:45.792 "thin_provision": true, 00:20:45.792 "num_allocated_clusters": 0, 00:20:45.792 "snapshot": false, 00:20:45.792 "clone": false, 00:20:45.792 "esnap_clone": false 00:20:45.792 } 00:20:45.792 } 00:20:45.792 } 00:20:45.792 ]' 00:20:45.792 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:45.792 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:45.792 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:45.792 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:45.792 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:45.792 23:01:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:45.792 23:01:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:45.792 23:01:24 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 50db3a03-e3ce-43ba-92e5-e2533fc7d65e --l2p_dram_limit 10' 00:20:45.792 23:01:24 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:45.792 23:01:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:45.792 23:01:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:45.792 23:01:24 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:45.792 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:45.792 23:01:24 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 50db3a03-e3ce-43ba-92e5-e2533fc7d65e --l2p_dram_limit 10 -c nvc0n1p0 00:20:46.055 
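restore.sh assembles the FTL create call incrementally: the base arguments name the bdev (ftl0), the backing lvol and the 10 MiB L2P DRAM limit (l2p_dram_size_mb=10), and ' -c nvc0n1p0' is appended only because a cache BDF was passed with -c. The '[: : integer expression expected' message is line 54 evaluating [ '' -eq 1 ] on a variable that is empty in this configuration; the run simply continues past it, and the usual guard is to default the value before a numeric test. Sketch below; $some_flag is a placeholder, not the actual variable name checked by restore.sh.

    ftl_construct_args="bdev_ftl_create -b ftl0 -d $split_bdev --l2p_dram_limit $l2p_dram_size_mb"
    [ -n "$nv_cache" ] && ftl_construct_args+=" -c nvc0n1p0"

    if [ "${some_flag:-0}" -eq 1 ]; then  # defaulting to 0 avoids the line-54 warning
      : # optional branch, not taken in this run
    fi

    $rpc_py -t "$timeout" $ftl_construct_args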
[2024-12-13 23:01:24.955009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.055 [2024-12-13 23:01:24.955047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:46.055 [2024-12-13 23:01:24.955060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.055 [2024-12-13 23:01:24.955066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.055 [2024-12-13 23:01:24.955111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.055 [2024-12-13 23:01:24.955119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.055 [2024-12-13 23:01:24.955128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:46.055 [2024-12-13 23:01:24.955134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.055 [2024-12-13 23:01:24.955153] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:46.055 [2024-12-13 23:01:24.955703] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:46.055 [2024-12-13 23:01:24.955719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.055 [2024-12-13 23:01:24.955725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.055 [2024-12-13 23:01:24.955733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:20:46.055 [2024-12-13 23:01:24.955739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.055 [2024-12-13 23:01:24.956057] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ae297e55-fec4-44cb-be54-881087e1b917 00:20:46.055 [2024-12-13 23:01:24.957027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.055 [2024-12-13 23:01:24.957060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:46.055 [2024-12-13 23:01:24.957069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:46.055 [2024-12-13 23:01:24.957079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.055 [2024-12-13 23:01:24.961859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.055 [2024-12-13 23:01:24.961887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.055 [2024-12-13 23:01:24.961896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.746 ms 00:20:46.055 [2024-12-13 23:01:24.961903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.055 [2024-12-13 23:01:24.961971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.055 [2024-12-13 23:01:24.961981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.055 [2024-12-13 23:01:24.961987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:46.056 [2024-12-13 23:01:24.961997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.056 [2024-12-13 23:01:24.962035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.056 [2024-12-13 23:01:24.962044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:46.056 [2024-12-13 23:01:24.962050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:46.056 [2024-12-13 23:01:24.962059] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.056 [2024-12-13 23:01:24.962076] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:46.056 [2024-12-13 23:01:24.964996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.056 [2024-12-13 23:01:24.965020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.056 [2024-12-13 23:01:24.965030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:20:46.056 [2024-12-13 23:01:24.965036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.056 [2024-12-13 23:01:24.965064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.056 [2024-12-13 23:01:24.965071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:46.056 [2024-12-13 23:01:24.965078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:46.056 [2024-12-13 23:01:24.965084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.056 [2024-12-13 23:01:24.965111] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:46.056 [2024-12-13 23:01:24.965221] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:46.056 [2024-12-13 23:01:24.965233] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:46.056 [2024-12-13 23:01:24.965240] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:46.056 [2024-12-13 23:01:24.965249] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965255] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965263] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:46.056 [2024-12-13 23:01:24.965268] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:46.056 [2024-12-13 23:01:24.965278] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:46.056 [2024-12-13 23:01:24.965283] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:46.056 [2024-12-13 23:01:24.965290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.056 [2024-12-13 23:01:24.965300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:46.056 [2024-12-13 23:01:24.965307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:20:46.056 [2024-12-13 23:01:24.965313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.056 [2024-12-13 23:01:24.965379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.056 [2024-12-13 23:01:24.965386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:46.056 [2024-12-13 23:01:24.965393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:46.056 [2024-12-13 23:01:24.965398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.056 [2024-12-13 23:01:24.965473] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:46.056 [2024-12-13 23:01:24.965480] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:20:46.056 [2024-12-13 23:01:24.965487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:46.056 [2024-12-13 23:01:24.965505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:46.056 [2024-12-13 23:01:24.965523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.056 [2024-12-13 23:01:24.965534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:46.056 [2024-12-13 23:01:24.965539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:46.056 [2024-12-13 23:01:24.965547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:46.056 [2024-12-13 23:01:24.965551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:46.056 [2024-12-13 23:01:24.965558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:46.056 [2024-12-13 23:01:24.965563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:46.056 [2024-12-13 23:01:24.965577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:46.056 [2024-12-13 23:01:24.965595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:46.056 [2024-12-13 23:01:24.965611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:46.056 [2024-12-13 23:01:24.965628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:46.056 [2024-12-13 23:01:24.965644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:46.056 [2024-12-13 23:01:24.965662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965667] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.056 [2024-12-13 23:01:24.965673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:46.056 [2024-12-13 23:01:24.965678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:46.056 [2024-12-13 23:01:24.965684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:46.056 [2024-12-13 23:01:24.965688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:46.056 [2024-12-13 23:01:24.965696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:46.056 [2024-12-13 23:01:24.965700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:46.056 [2024-12-13 23:01:24.965711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:46.056 [2024-12-13 23:01:24.965718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965722] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:46.056 [2024-12-13 23:01:24.965729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:46.056 [2024-12-13 23:01:24.965734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:46.056 [2024-12-13 23:01:24.965748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:46.056 [2024-12-13 23:01:24.965772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:46.056 [2024-12-13 23:01:24.965778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:46.056 [2024-12-13 23:01:24.965785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:46.056 [2024-12-13 23:01:24.965790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:46.056 [2024-12-13 23:01:24.965797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:46.056 [2024-12-13 23:01:24.965805] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:46.056 [2024-12-13 23:01:24.965813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.056 [2024-12-13 23:01:24.965821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:46.056 [2024-12-13 23:01:24.965828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:46.056 [2024-12-13 23:01:24.965833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:46.056 [2024-12-13 23:01:24.965840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:46.056 [2024-12-13 23:01:24.965845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:46.056 [2024-12-13 23:01:24.965852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:20:46.056 [2024-12-13 23:01:24.965857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:46.056 [2024-12-13 23:01:24.965864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:46.056 [2024-12-13 23:01:24.965869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:46.056 [2024-12-13 23:01:24.965878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:46.056 [2024-12-13 23:01:24.965883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:46.056 [2024-12-13 23:01:24.965890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:46.056 [2024-12-13 23:01:24.965895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:46.056 [2024-12-13 23:01:24.965902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:46.056 [2024-12-13 23:01:24.965908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:46.056 [2024-12-13 23:01:24.965915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:46.056 [2024-12-13 23:01:24.965921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:46.056 [2024-12-13 23:01:24.965928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:46.056 [2024-12-13 23:01:24.965933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:46.056 [2024-12-13 23:01:24.965940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:46.056 [2024-12-13 23:01:24.965945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.056 [2024-12-13 23:01:24.965952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:46.056 [2024-12-13 23:01:24.965957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:20:46.056 [2024-12-13 23:01:24.965964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.056 [2024-12-13 23:01:24.966005] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
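The layout figures in the dump above are internally consistent: the l2p region's 80.00 MiB is the 20971520 L2P entries times the 4-byte address size, each of the four p2l regions' 8.00 MiB corresponds to the 2048 P2L checkpoint pages at the 4096-byte block size, and the five chunks about to be scrubbed match the reported NV cache chunk count. A quick check of that arithmetic, with the values taken from the dump:

  echo $(( 20971520 * 4 / 1024 / 1024 ))   # l2p region size  -> 80 MiB
  echo $(( 2048 * 4096 / 1024 / 1024 ))    # one p2l region   -> 8 MiB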
00:20:46.056 [2024-12-13 23:01:24.966017] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:50.278 [2024-12-13 23:01:29.057321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.278 [2024-12-13 23:01:29.057411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:50.278 [2024-12-13 23:01:29.057429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4091.300 ms 00:20:50.278 [2024-12-13 23:01:29.057441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.278 [2024-12-13 23:01:29.089307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.278 [2024-12-13 23:01:29.089371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.278 [2024-12-13 23:01:29.089386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.615 ms 00:20:50.278 [2024-12-13 23:01:29.089396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.278 [2024-12-13 23:01:29.089541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.278 [2024-12-13 23:01:29.089556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:50.278 [2024-12-13 23:01:29.089565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:50.278 [2024-12-13 23:01:29.089582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.278 [2024-12-13 23:01:29.124774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.278 [2024-12-13 23:01:29.124826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.278 [2024-12-13 23:01:29.124838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.155 ms 00:20:50.278 [2024-12-13 23:01:29.124848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.278 [2024-12-13 23:01:29.124886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.278 [2024-12-13 23:01:29.124901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.278 [2024-12-13 23:01:29.124911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:50.278 [2024-12-13 23:01:29.124928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.278 [2024-12-13 23:01:29.125516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.278 [2024-12-13 23:01:29.125554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.278 [2024-12-13 23:01:29.125565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:20:50.278 [2024-12-13 23:01:29.125576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.278 [2024-12-13 23:01:29.125694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.278 [2024-12-13 23:01:29.125706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.278 [2024-12-13 23:01:29.125718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:50.278 [2024-12-13 23:01:29.125731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.278 [2024-12-13 23:01:29.143025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.278 [2024-12-13 23:01:29.143072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.278 [2024-12-13 
23:01:29.143083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.274 ms 00:20:50.278 [2024-12-13 23:01:29.143093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.278 [2024-12-13 23:01:29.164445] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:50.278 [2024-12-13 23:01:29.168411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.278 [2024-12-13 23:01:29.168456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.278 [2024-12-13 23:01:29.168472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.224 ms 00:20:50.278 [2024-12-13 23:01:29.168482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.279 [2024-12-13 23:01:29.271120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.279 [2024-12-13 23:01:29.271180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:50.279 [2024-12-13 23:01:29.271198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.588 ms 00:20:50.279 [2024-12-13 23:01:29.271208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.279 [2024-12-13 23:01:29.271417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.279 [2024-12-13 23:01:29.271432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.279 [2024-12-13 23:01:29.271447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:20:50.279 [2024-12-13 23:01:29.271456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.279 [2024-12-13 23:01:29.297364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.279 [2024-12-13 23:01:29.297412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:50.279 [2024-12-13 23:01:29.297429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.852 ms 00:20:50.279 [2024-12-13 23:01:29.297437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.279 [2024-12-13 23:01:29.322927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.279 [2024-12-13 23:01:29.322973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:50.279 [2024-12-13 23:01:29.322988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.433 ms 00:20:50.279 [2024-12-13 23:01:29.322997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.279 [2024-12-13 23:01:29.323611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.279 [2024-12-13 23:01:29.323623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:50.279 [2024-12-13 23:01:29.323634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:20:50.279 [2024-12-13 23:01:29.323645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.279 [2024-12-13 23:01:29.405032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.279 [2024-12-13 23:01:29.405082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:50.279 [2024-12-13 23:01:29.405102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.341 ms 00:20:50.279 [2024-12-13 23:01:29.405111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.541 [2024-12-13 
23:01:29.432970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.541 [2024-12-13 23:01:29.433020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:50.541 [2024-12-13 23:01:29.433037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.760 ms 00:20:50.541 [2024-12-13 23:01:29.433045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.541 [2024-12-13 23:01:29.459122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.541 [2024-12-13 23:01:29.459168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:50.541 [2024-12-13 23:01:29.459182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.019 ms 00:20:50.541 [2024-12-13 23:01:29.459190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.541 [2024-12-13 23:01:29.485356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.541 [2024-12-13 23:01:29.485406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.541 [2024-12-13 23:01:29.485421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.110 ms 00:20:50.541 [2024-12-13 23:01:29.485429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.541 [2024-12-13 23:01:29.485484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.541 [2024-12-13 23:01:29.485495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:50.541 [2024-12-13 23:01:29.485509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:50.541 [2024-12-13 23:01:29.485517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.541 [2024-12-13 23:01:29.485616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.541 [2024-12-13 23:01:29.485630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.541 [2024-12-13 23:01:29.485641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:50.541 [2024-12-13 23:01:29.485648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.541 [2024-12-13 23:01:29.487020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4531.318 ms, result 0 00:20:50.541 { 00:20:50.541 "name": "ftl0", 00:20:50.541 "uuid": "ae297e55-fec4-44cb-be54-881087e1b917" 00:20:50.541 } 00:20:50.541 23:01:29 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:50.541 23:01:29 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:50.802 23:01:29 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:50.802 23:01:29 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:50.802 [2024-12-13 23:01:29.902168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.802 [2024-12-13 23:01:29.902236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:50.802 [2024-12-13 23:01:29.902251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:50.802 [2024-12-13 23:01:29.902262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.802 [2024-12-13 23:01:29.902287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
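With startup complete, ftl0 is reported back as a JSON object carrying the new device UUID (ae297e55-fec4-44cb-be54-881087e1b917); the bdev subsystem configuration is then wrapped in a '{"subsystems": [...]}' envelope by the two echo calls around save_subsystem_config, and the device is unloaded so the restore path can later bring it back from the saved state. The spdk_dd step further down reads test/ftl/config/ftl.json, so presumably the envelope and the RPC output are concatenated into that file; the redirection itself is not visible in this excerpt. A minimal sketch under that assumption:

  # Assumed assembly of the config later consumed via --json=.../test/ftl/config/ftl.json;
  # the actual redirection target is not shown in this log.
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > "$CFG"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0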
00:20:50.802 [2024-12-13 23:01:29.905349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.802 [2024-12-13 23:01:29.905390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:50.802 [2024-12-13 23:01:29.905405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:20:50.802 [2024-12-13 23:01:29.905414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.802 [2024-12-13 23:01:29.905688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.802 [2024-12-13 23:01:29.905703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:50.802 [2024-12-13 23:01:29.905714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:50.802 [2024-12-13 23:01:29.905722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.802 [2024-12-13 23:01:29.908977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.802 [2024-12-13 23:01:29.909001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:50.802 [2024-12-13 23:01:29.909013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:20:50.802 [2024-12-13 23:01:29.909021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.802 [2024-12-13 23:01:29.915119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.802 [2024-12-13 23:01:29.915163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:50.802 [2024-12-13 23:01:29.915180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.074 ms 00:20:50.802 [2024-12-13 23:01:29.915188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-12-13 23:01:29.941669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-12-13 23:01:29.941719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:51.064 [2024-12-13 23:01:29.941733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.394 ms 00:20:51.064 [2024-12-13 23:01:29.941741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-12-13 23:01:29.959630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-12-13 23:01:29.959677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:51.064 [2024-12-13 23:01:29.959693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.817 ms 00:20:51.064 [2024-12-13 23:01:29.959701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-12-13 23:01:29.959898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-12-13 23:01:29.959912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:51.064 [2024-12-13 23:01:29.959924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:20:51.064 [2024-12-13 23:01:29.959932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-12-13 23:01:29.985828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-12-13 23:01:29.985878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:51.064 [2024-12-13 23:01:29.985893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.869 ms 00:20:51.064 [2024-12-13 23:01:29.985900] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-12-13 23:01:30.011213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-12-13 23:01:30.011261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:51.064 [2024-12-13 23:01:30.011275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.257 ms 00:20:51.064 [2024-12-13 23:01:30.011282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-12-13 23:01:30.036097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-12-13 23:01:30.036145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:51.064 [2024-12-13 23:01:30.036158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.756 ms 00:20:51.064 [2024-12-13 23:01:30.036166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-12-13 23:01:30.060707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.064 [2024-12-13 23:01:30.060764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:51.064 [2024-12-13 23:01:30.060780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.441 ms 00:20:51.064 [2024-12-13 23:01:30.060787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.064 [2024-12-13 23:01:30.060838] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:51.064 [2024-12-13 23:01:30.060855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.060985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 
23:01:30.060993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:51.064 [2024-12-13 23:01:30.061160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:20:51.065 [2024-12-13 23:01:30.061226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:51.065 [2024-12-13 23:01:30.061791] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:51.065 [2024-12-13 23:01:30.061801] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae297e55-fec4-44cb-be54-881087e1b917 00:20:51.065 [2024-12-13 23:01:30.061811] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:51.065 [2024-12-13 23:01:30.061829] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:51.065 [2024-12-13 23:01:30.061841] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:51.065 [2024-12-13 23:01:30.061852] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:51.065 [2024-12-13 23:01:30.061859] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:51.065 [2024-12-13 23:01:30.061870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:51.065 [2024-12-13 23:01:30.061877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:51.065 [2024-12-13 23:01:30.061886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:51.065 [2024-12-13 23:01:30.061893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:51.065 [2024-12-13 23:01:30.061903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-12-13 23:01:30.061911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:51.065 [2024-12-13 23:01:30.061922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:20:51.065 [2024-12-13 23:01:30.061933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-12-13 23:01:30.075400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-12-13 23:01:30.075440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:20:51.065 [2024-12-13 23:01:30.075453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.420 ms 00:20:51.065 [2024-12-13 23:01:30.075462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-12-13 23:01:30.075906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.065 [2024-12-13 23:01:30.075920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:51.065 [2024-12-13 23:01:30.075935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:20:51.065 [2024-12-13 23:01:30.075943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-12-13 23:01:30.122496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.065 [2024-12-13 23:01:30.122549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.065 [2024-12-13 23:01:30.122563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.065 [2024-12-13 23:01:30.122571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.065 [2024-12-13 23:01:30.122652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.065 [2024-12-13 23:01:30.122661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.065 [2024-12-13 23:01:30.122675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.065 [2024-12-13 23:01:30.122684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.066 [2024-12-13 23:01:30.122797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.066 [2024-12-13 23:01:30.122809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.066 [2024-12-13 23:01:30.122820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.066 [2024-12-13 23:01:30.122828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.066 [2024-12-13 23:01:30.122853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.066 [2024-12-13 23:01:30.122862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.066 [2024-12-13 23:01:30.122872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.066 [2024-12-13 23:01:30.122883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.333 [2024-12-13 23:01:30.206398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.333 [2024-12-13 23:01:30.206456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.333 [2024-12-13 23:01:30.206471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.333 [2024-12-13 23:01:30.206480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.333 [2024-12-13 23:01:30.275028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.333 [2024-12-13 23:01:30.275085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.333 [2024-12-13 23:01:30.275098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.333 [2024-12-13 23:01:30.275111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.333 [2024-12-13 23:01:30.275220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.333 [2024-12-13 23:01:30.275231] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.333 [2024-12-13 23:01:30.275242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.333 [2024-12-13 23:01:30.275250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.333 [2024-12-13 23:01:30.275304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.333 [2024-12-13 23:01:30.275315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.333 [2024-12-13 23:01:30.275326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.333 [2024-12-13 23:01:30.275334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.333 [2024-12-13 23:01:30.275445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.333 [2024-12-13 23:01:30.275456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.333 [2024-12-13 23:01:30.275466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.334 [2024-12-13 23:01:30.275475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.334 [2024-12-13 23:01:30.275513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.334 [2024-12-13 23:01:30.275523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:51.334 [2024-12-13 23:01:30.275534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.334 [2024-12-13 23:01:30.275543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.334 [2024-12-13 23:01:30.275591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.334 [2024-12-13 23:01:30.275601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.334 [2024-12-13 23:01:30.275611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.334 [2024-12-13 23:01:30.275619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.334 [2024-12-13 23:01:30.275672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.334 [2024-12-13 23:01:30.275682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.334 [2024-12-13 23:01:30.275693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.334 [2024-12-13 23:01:30.275701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.334 [2024-12-13 23:01:30.275905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.691 ms, result 0 00:20:51.334 true 00:20:51.334 23:01:30 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79036 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79036 ']' 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79036 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79036 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.334 killing process with pid 
79036 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79036' 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79036 00:20:51.334 23:01:30 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79036 00:20:57.925 23:01:36 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:02.133 262144+0 records in 00:21:02.133 262144+0 records out 00:21:02.133 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.77852 s, 284 MB/s 00:21:02.133 23:01:40 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:04.049 23:01:42 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:04.049 [2024-12-13 23:01:42.734886] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:04.049 [2024-12-13 23:01:42.735015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79274 ] 00:21:04.049 [2024-12-13 23:01:42.897650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.049 [2024-12-13 23:01:43.009235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.311 [2024-12-13 23:01:43.295567] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.311 [2024-12-13 23:01:43.295655] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.574 [2024-12-13 23:01:43.456728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.456811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:04.574 [2024-12-13 23:01:43.456828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:04.574 [2024-12-13 23:01:43.456838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.456892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.456905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.574 [2024-12-13 23:01:43.456915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:04.574 [2024-12-13 23:01:43.456923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.456949] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:04.574 [2024-12-13 23:01:43.457630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:04.574 [2024-12-13 23:01:43.457648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.457656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.574 [2024-12-13 23:01:43.457667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:21:04.574 [2024-12-13 23:01:43.457675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.459402] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:04.574 [2024-12-13 23:01:43.473690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.473744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:04.574 [2024-12-13 23:01:43.473765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.290 ms 00:21:04.574 [2024-12-13 23:01:43.473775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.473867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.473878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:04.574 [2024-12-13 23:01:43.473888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:04.574 [2024-12-13 23:01:43.473897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.482373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.482420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.574 [2024-12-13 23:01:43.482438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.392 ms 00:21:04.574 [2024-12-13 23:01:43.482446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.482527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.482538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.574 [2024-12-13 23:01:43.482547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:04.574 [2024-12-13 23:01:43.482555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.482602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.482612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:04.574 [2024-12-13 23:01:43.482621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:04.574 [2024-12-13 23:01:43.482632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.482654] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.574 [2024-12-13 23:01:43.486655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.486699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.574 [2024-12-13 23:01:43.486711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:21:04.574 [2024-12-13 23:01:43.486719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.486775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.486784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:04.574 [2024-12-13 23:01:43.486794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:04.574 [2024-12-13 23:01:43.486802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.486856] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:04.574 [2024-12-13 23:01:43.486883] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:04.574 [2024-12-13 23:01:43.486925] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:04.574 [2024-12-13 23:01:43.486942] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:04.574 [2024-12-13 23:01:43.487049] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:04.574 [2024-12-13 23:01:43.487060] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:04.574 [2024-12-13 23:01:43.487070] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:04.574 [2024-12-13 23:01:43.487081] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:04.574 [2024-12-13 23:01:43.487090] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:04.574 [2024-12-13 23:01:43.487099] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:04.574 [2024-12-13 23:01:43.487107] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:04.574 [2024-12-13 23:01:43.487118] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:04.574 [2024-12-13 23:01:43.487127] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:04.574 [2024-12-13 23:01:43.487135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.487143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:04.574 [2024-12-13 23:01:43.487151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:21:04.574 [2024-12-13 23:01:43.487159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.487248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.574 [2024-12-13 23:01:43.487257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:04.574 [2024-12-13 23:01:43.487266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:04.574 [2024-12-13 23:01:43.487273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.574 [2024-12-13 23:01:43.487372] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:04.574 [2024-12-13 23:01:43.487383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:04.574 [2024-12-13 23:01:43.487391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.574 [2024-12-13 23:01:43.487400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.574 [2024-12-13 23:01:43.487408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:04.574 [2024-12-13 23:01:43.487416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:04.574 [2024-12-13 23:01:43.487423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:04.574 [2024-12-13 23:01:43.487430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:04.574 [2024-12-13 23:01:43.487437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:21:04.574 [2024-12-13 23:01:43.487444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.574 [2024-12-13 23:01:43.487452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:04.574 [2024-12-13 23:01:43.487460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:04.574 [2024-12-13 23:01:43.487467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.574 [2024-12-13 23:01:43.487483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:04.574 [2024-12-13 23:01:43.487490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:04.574 [2024-12-13 23:01:43.487498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.574 [2024-12-13 23:01:43.487506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:04.574 [2024-12-13 23:01:43.487513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:04.574 [2024-12-13 23:01:43.487520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.574 [2024-12-13 23:01:43.487527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:04.574 [2024-12-13 23:01:43.487534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:04.575 [2024-12-13 23:01:43.487541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.575 [2024-12-13 23:01:43.487547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:04.575 [2024-12-13 23:01:43.487553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:04.575 [2024-12-13 23:01:43.487559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.575 [2024-12-13 23:01:43.487566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:04.575 [2024-12-13 23:01:43.487572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:04.575 [2024-12-13 23:01:43.487578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.575 [2024-12-13 23:01:43.487584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:04.575 [2024-12-13 23:01:43.487591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:04.575 [2024-12-13 23:01:43.487597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.575 [2024-12-13 23:01:43.487604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:04.575 [2024-12-13 23:01:43.487611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:04.575 [2024-12-13 23:01:43.487618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.575 [2024-12-13 23:01:43.487624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:04.575 [2024-12-13 23:01:43.487631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:04.575 [2024-12-13 23:01:43.487637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.575 [2024-12-13 23:01:43.487643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:04.575 [2024-12-13 23:01:43.487649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:04.575 [2024-12-13 23:01:43.487656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.575 [2024-12-13 23:01:43.487662] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:04.575 [2024-12-13 23:01:43.487669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:04.575 [2024-12-13 23:01:43.487676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.575 [2024-12-13 23:01:43.487684] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:04.575 [2024-12-13 23:01:43.487692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:04.575 [2024-12-13 23:01:43.487701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.575 [2024-12-13 23:01:43.487708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.575 [2024-12-13 23:01:43.487716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:04.575 [2024-12-13 23:01:43.487725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:04.575 [2024-12-13 23:01:43.487733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:04.575 [2024-12-13 23:01:43.487740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:04.575 [2024-12-13 23:01:43.487746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:04.575 [2024-12-13 23:01:43.487788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:04.575 [2024-12-13 23:01:43.487798] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:04.575 [2024-12-13 23:01:43.487810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.575 [2024-12-13 23:01:43.487819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:04.575 [2024-12-13 23:01:43.487827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:04.575 [2024-12-13 23:01:43.487835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:04.575 [2024-12-13 23:01:43.487842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:04.575 [2024-12-13 23:01:43.487850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:04.575 [2024-12-13 23:01:43.487857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:04.575 [2024-12-13 23:01:43.487864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:04.575 [2024-12-13 23:01:43.487872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:04.575 [2024-12-13 23:01:43.487879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:04.575 [2024-12-13 23:01:43.487886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:04.575 [2024-12-13 23:01:43.487894] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:04.575 [2024-12-13 23:01:43.487902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:04.575 [2024-12-13 23:01:43.487909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:04.575 [2024-12-13 23:01:43.487917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:04.575 [2024-12-13 23:01:43.487923] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:04.575 [2024-12-13 23:01:43.487932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.575 [2024-12-13 23:01:43.487941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:04.575 [2024-12-13 23:01:43.487948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:04.575 [2024-12-13 23:01:43.487955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:04.575 [2024-12-13 23:01:43.487965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:04.575 [2024-12-13 23:01:43.487975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.575 [2024-12-13 23:01:43.487982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:04.575 [2024-12-13 23:01:43.487991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:21:04.575 [2024-12-13 23:01:43.487998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.575 [2024-12-13 23:01:43.520384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.575 [2024-12-13 23:01:43.520438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.575 [2024-12-13 23:01:43.520453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.339 ms 00:21:04.575 [2024-12-13 23:01:43.520462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.575 [2024-12-13 23:01:43.520556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.575 [2024-12-13 23:01:43.520566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:04.575 [2024-12-13 23:01:43.520574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:04.575 [2024-12-13 23:01:43.520586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.575 [2024-12-13 23:01:43.570052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.575 [2024-12-13 23:01:43.570112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.575 [2024-12-13 23:01:43.570126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.407 ms 00:21:04.575 [2024-12-13 23:01:43.570135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.575 [2024-12-13 23:01:43.570188] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.575 [2024-12-13 23:01:43.570203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.575 [2024-12-13 23:01:43.570213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.575 [2024-12-13 23:01:43.570221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.575 [2024-12-13 23:01:43.570874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.575 [2024-12-13 23:01:43.570911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.575 [2024-12-13 23:01:43.570922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:21:04.575 [2024-12-13 23:01:43.570930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.575 [2024-12-13 23:01:43.571094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.575 [2024-12-13 23:01:43.571108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.575 [2024-12-13 23:01:43.571116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:21:04.575 [2024-12-13 23:01:43.571124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.575 [2024-12-13 23:01:43.587116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.575 [2024-12-13 23:01:43.587167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.575 [2024-12-13 23:01:43.587180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.971 ms 00:21:04.575 [2024-12-13 23:01:43.587189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.575 [2024-12-13 23:01:43.601817] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:04.575 [2024-12-13 23:01:43.601877] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:04.575 [2024-12-13 23:01:43.601891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.575 [2024-12-13 23:01:43.601899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:04.576 [2024-12-13 23:01:43.601910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.587 ms 00:21:04.576 [2024-12-13 23:01:43.601917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.576 [2024-12-13 23:01:43.628087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.576 [2024-12-13 23:01:43.628146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:04.576 [2024-12-13 23:01:43.628160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.112 ms 00:21:04.576 [2024-12-13 23:01:43.628169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.576 [2024-12-13 23:01:43.641322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.576 [2024-12-13 23:01:43.641356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:04.576 [2024-12-13 23:01:43.641366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.095 ms 00:21:04.576 [2024-12-13 23:01:43.641372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.576 [2024-12-13 23:01:43.652904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.576 [2024-12-13 
23:01:43.652945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:04.576 [2024-12-13 23:01:43.652955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.499 ms 00:21:04.576 [2024-12-13 23:01:43.652963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.576 [2024-12-13 23:01:43.653547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.576 [2024-12-13 23:01:43.653567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:04.576 [2024-12-13 23:01:43.653578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:21:04.576 [2024-12-13 23:01:43.653585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.576 [2024-12-13 23:01:43.709258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.576 [2024-12-13 23:01:43.709306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:04.576 [2024-12-13 23:01:43.709323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.656 ms 00:21:04.576 [2024-12-13 23:01:43.709331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.837 [2024-12-13 23:01:43.719868] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:04.837 [2024-12-13 23:01:43.722294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.837 [2024-12-13 23:01:43.722326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:04.837 [2024-12-13 23:01:43.722338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.919 ms 00:21:04.837 [2024-12-13 23:01:43.722345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.837 [2024-12-13 23:01:43.722418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.837 [2024-12-13 23:01:43.722429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:04.837 [2024-12-13 23:01:43.722438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:04.837 [2024-12-13 23:01:43.722448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.837 [2024-12-13 23:01:43.722510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.837 [2024-12-13 23:01:43.722521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:04.837 [2024-12-13 23:01:43.722529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:04.837 [2024-12-13 23:01:43.722537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.837 [2024-12-13 23:01:43.722555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.837 [2024-12-13 23:01:43.722562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:04.837 [2024-12-13 23:01:43.722571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:04.837 [2024-12-13 23:01:43.722578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.837 [2024-12-13 23:01:43.722610] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:04.837 [2024-12-13 23:01:43.722620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.837 [2024-12-13 23:01:43.722628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:04.837 
[2024-12-13 23:01:43.722635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:04.837 [2024-12-13 23:01:43.722642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.837 [2024-12-13 23:01:43.746871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.837 [2024-12-13 23:01:43.746913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:04.837 [2024-12-13 23:01:43.746925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.212 ms 00:21:04.837 [2024-12-13 23:01:43.746937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.837 [2024-12-13 23:01:43.747011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.837 [2024-12-13 23:01:43.747020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:04.837 [2024-12-13 23:01:43.747029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:04.837 [2024-12-13 23:01:43.747036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.837 [2024-12-13 23:01:43.748105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 290.900 ms, result 0 00:21:05.782  [2024-12-13T23:01:45.861Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-13T23:01:46.800Z] Copying: 52/1024 [MB] (37 MBps) [2024-12-13T23:01:48.183Z] Copying: 84/1024 [MB] (32 MBps) [2024-12-13T23:01:49.166Z] Copying: 128/1024 [MB] (43 MBps) [2024-12-13T23:01:50.127Z] Copying: 171/1024 [MB] (43 MBps) [2024-12-13T23:01:51.065Z] Copying: 195/1024 [MB] (23 MBps) [2024-12-13T23:01:52.002Z] Copying: 217/1024 [MB] (22 MBps) [2024-12-13T23:01:52.934Z] Copying: 239/1024 [MB] (22 MBps) [2024-12-13T23:01:53.869Z] Copying: 281/1024 [MB] (41 MBps) [2024-12-13T23:01:54.814Z] Copying: 318/1024 [MB] (37 MBps) [2024-12-13T23:01:56.196Z] Copying: 329/1024 [MB] (10 MBps) [2024-12-13T23:01:56.762Z] Copying: 340/1024 [MB] (11 MBps) [2024-12-13T23:01:58.136Z] Copying: 380/1024 [MB] (40 MBps) [2024-12-13T23:01:59.073Z] Copying: 425/1024 [MB] (44 MBps) [2024-12-13T23:02:00.016Z] Copying: 471/1024 [MB] (46 MBps) [2024-12-13T23:02:00.960Z] Copying: 496/1024 [MB] (25 MBps) [2024-12-13T23:02:01.900Z] Copying: 509/1024 [MB] (12 MBps) [2024-12-13T23:02:02.842Z] Copying: 529/1024 [MB] (20 MBps) [2024-12-13T23:02:03.790Z] Copying: 547/1024 [MB] (17 MBps) [2024-12-13T23:02:05.185Z] Copying: 562/1024 [MB] (15 MBps) [2024-12-13T23:02:06.123Z] Copying: 579/1024 [MB] (17 MBps) [2024-12-13T23:02:07.064Z] Copying: 598/1024 [MB] (18 MBps) [2024-12-13T23:02:08.035Z] Copying: 615/1024 [MB] (16 MBps) [2024-12-13T23:02:08.982Z] Copying: 628/1024 [MB] (12 MBps) [2024-12-13T23:02:09.917Z] Copying: 638/1024 [MB] (10 MBps) [2024-12-13T23:02:10.855Z] Copying: 656/1024 [MB] (18 MBps) [2024-12-13T23:02:11.787Z] Copying: 684/1024 [MB] (28 MBps) [2024-12-13T23:02:13.194Z] Copying: 727/1024 [MB] (42 MBps) [2024-12-13T23:02:14.131Z] Copying: 771/1024 [MB] (44 MBps) [2024-12-13T23:02:15.074Z] Copying: 815/1024 [MB] (44 MBps) [2024-12-13T23:02:16.016Z] Copying: 840/1024 [MB] (24 MBps) [2024-12-13T23:02:16.957Z] Copying: 851/1024 [MB] (10 MBps) [2024-12-13T23:02:17.912Z] Copying: 864/1024 [MB] (13 MBps) [2024-12-13T23:02:18.857Z] Copying: 876/1024 [MB] (11 MBps) [2024-12-13T23:02:19.807Z] Copying: 887/1024 [MB] (10 MBps) [2024-12-13T23:02:21.191Z] Copying: 898/1024 [MB] (11 MBps) [2024-12-13T23:02:21.767Z] Copying: 927/1024 [MB] (28 MBps) [2024-12-13T23:02:23.152Z] 
Copying: 949/1024 [MB] (22 MBps) [2024-12-13T23:02:24.094Z] Copying: 975/1024 [MB] (25 MBps) [2024-12-13T23:02:25.040Z] Copying: 1002/1024 [MB] (26 MBps) [2024-12-13T23:02:25.613Z] Copying: 1012/1024 [MB] (10 MBps) [2024-12-13T23:02:25.613Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-13 23:02:25.396680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.396741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:46.473 [2024-12-13 23:02:25.396775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:46.473 [2024-12-13 23:02:25.396784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.396806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:46.473 [2024-12-13 23:02:25.399890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.399929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:46.473 [2024-12-13 23:02:25.399953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms 00:21:46.473 [2024-12-13 23:02:25.399961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.402010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.402058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:46.473 [2024-12-13 23:02:25.402069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:21:46.473 [2024-12-13 23:02:25.402077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.422304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.422355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:46.473 [2024-12-13 23:02:25.422367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.209 ms 00:21:46.473 [2024-12-13 23:02:25.422384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.428535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.428576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:46.473 [2024-12-13 23:02:25.428588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.107 ms 00:21:46.473 [2024-12-13 23:02:25.428595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.456482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.456534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:46.473 [2024-12-13 23:02:25.456547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.833 ms 00:21:46.473 [2024-12-13 23:02:25.456554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.472383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.472434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:46.473 [2024-12-13 23:02:25.472447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.781 ms 00:21:46.473 [2024-12-13 23:02:25.472456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.472614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.472627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:46.473 [2024-12-13 23:02:25.472637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:46.473 [2024-12-13 23:02:25.472645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.498474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.498520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:46.473 [2024-12-13 23:02:25.498531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.814 ms 00:21:46.473 [2024-12-13 23:02:25.498539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.524035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.524088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:46.473 [2024-12-13 23:02:25.524099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.442 ms 00:21:46.473 [2024-12-13 23:02:25.524106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.548904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.548952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:46.473 [2024-12-13 23:02:25.548963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.752 ms 00:21:46.473 [2024-12-13 23:02:25.548970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.574390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.473 [2024-12-13 23:02:25.574439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:46.473 [2024-12-13 23:02:25.574451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.349 ms 00:21:46.473 [2024-12-13 23:02:25.574458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.473 [2024-12-13 23:02:25.574501] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:46.473 [2024-12-13 23:02:25.574525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:46.473 [2024-12-13 23:02:25.574539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 
[2024-12-13 23:02:25.574596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 00:21:46.474 [2024-12-13 23:02:25.574803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.574999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:46.474 [2024-12-13 23:02:25.575260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:46.475 [2024-12-13 23:02:25.575267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:46.475 [2024-12-13 23:02:25.575276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:46.475 [2024-12-13 23:02:25.575283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:46.475 [2024-12-13 23:02:25.575290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:46.475 [2024-12-13 23:02:25.575297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:46.475 [2024-12-13 23:02:25.575305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:46.475 [2024-12-13 23:02:25.575321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:46.475 [2024-12-13 23:02:25.575329] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae297e55-fec4-44cb-be54-881087e1b917 00:21:46.475 [2024-12-13 23:02:25.575337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:46.475 [2024-12-13 23:02:25.575344] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:46.475 [2024-12-13 23:02:25.575351] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:46.475 [2024-12-13 23:02:25.575359] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:46.475 [2024-12-13 23:02:25.575366] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:46.475 [2024-12-13 23:02:25.575381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:46.475 [2024-12-13 23:02:25.575394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:46.475 
[2024-12-13 23:02:25.575401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:46.475 [2024-12-13 23:02:25.575408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:46.475 [2024-12-13 23:02:25.575415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.475 [2024-12-13 23:02:25.575422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:46.475 [2024-12-13 23:02:25.575432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:21:46.475 [2024-12-13 23:02:25.575441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.475 [2024-12-13 23:02:25.589144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.475 [2024-12-13 23:02:25.589192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:46.475 [2024-12-13 23:02:25.589203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.683 ms 00:21:46.475 [2024-12-13 23:02:25.589210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.475 [2024-12-13 23:02:25.589614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.475 [2024-12-13 23:02:25.589633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:46.475 [2024-12-13 23:02:25.589650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:21:46.475 [2024-12-13 23:02:25.589658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.736 [2024-12-13 23:02:25.626152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.626204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:46.737 [2024-12-13 23:02:25.626215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.626223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.626286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.626295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:46.737 [2024-12-13 23:02:25.626310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.626318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.626379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.626389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:46.737 [2024-12-13 23:02:25.626398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.626405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.626421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.626429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:46.737 [2024-12-13 23:02:25.626437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.626449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.712231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.712292] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:46.737 [2024-12-13 23:02:25.712307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.712315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.782573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.782630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.737 [2024-12-13 23:02:25.782642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.782657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.782740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.782751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:46.737 [2024-12-13 23:02:25.782784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.782794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.782836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.782846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:46.737 [2024-12-13 23:02:25.782855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.782864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.782973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.782985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:46.737 [2024-12-13 23:02:25.782993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.783002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.783037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.783047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:46.737 [2024-12-13 23:02:25.783055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.783063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.783107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.783117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:46.737 [2024-12-13 23:02:25.783125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.783133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.783179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.737 [2024-12-13 23:02:25.783190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:46.737 [2024-12-13 23:02:25.783198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.737 [2024-12-13 23:02:25.783207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.737 [2024-12-13 23:02:25.783343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 386.632 ms, result 0 00:21:47.679 00:21:47.679 00:21:47.679 23:02:26 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:47.679 [2024-12-13 23:02:26.704666] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:47.679 [2024-12-13 23:02:26.704851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79734 ] 00:21:47.940 [2024-12-13 23:02:26.872936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.940 [2024-12-13 23:02:26.976437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.200 [2024-12-13 23:02:27.272166] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.200 [2024-12-13 23:02:27.272260] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.461 [2024-12-13 23:02:27.432874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.432936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:48.461 [2024-12-13 23:02:27.432951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:48.461 [2024-12-13 23:02:27.432959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.433019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.433032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.461 [2024-12-13 23:02:27.433042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:48.461 [2024-12-13 23:02:27.433050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.433071] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:48.461 [2024-12-13 23:02:27.434124] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:48.461 [2024-12-13 23:02:27.434208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.434230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.461 [2024-12-13 23:02:27.434251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:21:48.461 [2024-12-13 23:02:27.434270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.436046] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:48.461 [2024-12-13 23:02:27.450187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.450366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:48.461 [2024-12-13 23:02:27.450568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.143 ms 00:21:48.461 [2024-12-13 23:02:27.450611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.450705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:48.461 [2024-12-13 23:02:27.450732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:48.461 [2024-12-13 23:02:27.450768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:48.461 [2024-12-13 23:02:27.450790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.458928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.459090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.461 [2024-12-13 23:02:27.459108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.043 ms 00:21:48.461 [2024-12-13 23:02:27.459124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.459206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.459216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.461 [2024-12-13 23:02:27.459225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:48.461 [2024-12-13 23:02:27.459233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.459279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.459290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:48.461 [2024-12-13 23:02:27.459299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:48.461 [2024-12-13 23:02:27.459307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.459335] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:48.461 [2024-12-13 23:02:27.463317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.463357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.461 [2024-12-13 23:02:27.463371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.988 ms 00:21:48.461 [2024-12-13 23:02:27.463380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.463420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.463429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:48.461 [2024-12-13 23:02:27.463439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:48.461 [2024-12-13 23:02:27.463447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.463500] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:48.461 [2024-12-13 23:02:27.463525] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:48.461 [2024-12-13 23:02:27.463563] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:48.461 [2024-12-13 23:02:27.463582] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:48.461 [2024-12-13 23:02:27.463689] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:48.461 [2024-12-13 23:02:27.463701] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:48.461 [2024-12-13 23:02:27.463712] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:48.461 [2024-12-13 23:02:27.463722] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:48.461 [2024-12-13 23:02:27.463732] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:48.461 [2024-12-13 23:02:27.463741] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:48.461 [2024-12-13 23:02:27.463797] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:48.461 [2024-12-13 23:02:27.463808] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:48.461 [2024-12-13 23:02:27.463819] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:48.461 [2024-12-13 23:02:27.463828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.463836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:48.461 [2024-12-13 23:02:27.463846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:21:48.461 [2024-12-13 23:02:27.463854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.463938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.461 [2024-12-13 23:02:27.463947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:48.461 [2024-12-13 23:02:27.463956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:48.461 [2024-12-13 23:02:27.463964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.461 [2024-12-13 23:02:27.464067] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:48.461 [2024-12-13 23:02:27.464079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:48.461 [2024-12-13 23:02:27.464087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.461 [2024-12-13 23:02:27.464095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.461 [2024-12-13 23:02:27.464103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:48.461 [2024-12-13 23:02:27.464110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:48.461 [2024-12-13 23:02:27.464117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:48.461 [2024-12-13 23:02:27.464126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:48.461 [2024-12-13 23:02:27.464134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:48.461 [2024-12-13 23:02:27.464141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.461 [2024-12-13 23:02:27.464148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:48.461 [2024-12-13 23:02:27.464155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:48.461 [2024-12-13 23:02:27.464161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.461 [2024-12-13 23:02:27.464176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:48.461 [2024-12-13 23:02:27.464187] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:48.461 [2024-12-13 23:02:27.464194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.461 [2024-12-13 23:02:27.464201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:48.461 [2024-12-13 23:02:27.464208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:48.461 [2024-12-13 23:02:27.464215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.462 [2024-12-13 23:02:27.464223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:48.462 [2024-12-13 23:02:27.464230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:48.462 [2024-12-13 23:02:27.464238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.462 [2024-12-13 23:02:27.464245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:48.462 [2024-12-13 23:02:27.464252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:48.462 [2024-12-13 23:02:27.464258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.462 [2024-12-13 23:02:27.464265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:48.462 [2024-12-13 23:02:27.464272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:48.462 [2024-12-13 23:02:27.464279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.462 [2024-12-13 23:02:27.464286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:48.462 [2024-12-13 23:02:27.464294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:48.462 [2024-12-13 23:02:27.464300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.462 [2024-12-13 23:02:27.464307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:48.462 [2024-12-13 23:02:27.464314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:48.462 [2024-12-13 23:02:27.464320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.462 [2024-12-13 23:02:27.464327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:48.462 [2024-12-13 23:02:27.464333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:48.462 [2024-12-13 23:02:27.464339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.462 [2024-12-13 23:02:27.464347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:48.462 [2024-12-13 23:02:27.464353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:48.462 [2024-12-13 23:02:27.464359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.462 [2024-12-13 23:02:27.464366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:48.462 [2024-12-13 23:02:27.464373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:48.462 [2024-12-13 23:02:27.464380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.462 [2024-12-13 23:02:27.464387] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:48.462 [2024-12-13 23:02:27.464395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:48.462 [2024-12-13 23:02:27.464403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:21:48.462 [2024-12-13 23:02:27.464412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.462 [2024-12-13 23:02:27.464421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:48.462 [2024-12-13 23:02:27.464428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:48.462 [2024-12-13 23:02:27.464434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:48.462 [2024-12-13 23:02:27.464442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:48.462 [2024-12-13 23:02:27.464449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:48.462 [2024-12-13 23:02:27.464456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:48.462 [2024-12-13 23:02:27.464465] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:48.462 [2024-12-13 23:02:27.464474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.462 [2024-12-13 23:02:27.464485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:48.462 [2024-12-13 23:02:27.464492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:48.462 [2024-12-13 23:02:27.464500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:48.462 [2024-12-13 23:02:27.464507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:48.462 [2024-12-13 23:02:27.464514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:48.462 [2024-12-13 23:02:27.464521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:48.462 [2024-12-13 23:02:27.464528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:48.462 [2024-12-13 23:02:27.464535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:48.462 [2024-12-13 23:02:27.464542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:48.462 [2024-12-13 23:02:27.464549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:48.462 [2024-12-13 23:02:27.464556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:48.462 [2024-12-13 23:02:27.464563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:48.462 [2024-12-13 23:02:27.464570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:48.462 [2024-12-13 23:02:27.464578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
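The superblock metadata dump above lists each NV-cache region as type/ver/blk_offs/blk_sz in hexadecimal block units, while the ftl_layout region dump a few lines earlier reports the same regions in MiB. The two views agree if one FTL block is 4 KiB; that block size is an inference from the dumps, not something the log prints. A minimal shell sketch of the conversion for the l2p region (type:0x2, blk_sz:0x5000):

blk_sz=0x5000                              # l2p region size from the table above
echo "$(( blk_sz * 4096 / 1048576 )) MiB"  # prints "80 MiB", matching "Region l2p ... blocks: 80.00 MiB"

The same arithmetic maps blk_sz:0x20 to 0.12 MiB (the sb region) and the 0x800-block regions to 8.00 MiB, the size shown for each p2l checkpoint region.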
00:21:48.462 [2024-12-13 23:02:27.464585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:48.462 [2024-12-13 23:02:27.464593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.462 [2024-12-13 23:02:27.464602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:48.462 [2024-12-13 23:02:27.464609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:48.462 [2024-12-13 23:02:27.464616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:48.462 [2024-12-13 23:02:27.464623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:48.462 [2024-12-13 23:02:27.464630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.462 [2024-12-13 23:02:27.464638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:48.462 [2024-12-13 23:02:27.464646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:21:48.462 [2024-12-13 23:02:27.464656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.462 [2024-12-13 23:02:27.497005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.462 [2024-12-13 23:02:27.497197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.462 [2024-12-13 23:02:27.497268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.303 ms 00:21:48.462 [2024-12-13 23:02:27.497302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.462 [2024-12-13 23:02:27.497404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.462 [2024-12-13 23:02:27.497428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:48.462 [2024-12-13 23:02:27.497450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:48.462 [2024-12-13 23:02:27.497517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.462 [2024-12-13 23:02:27.545878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.462 [2024-12-13 23:02:27.546086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.462 [2024-12-13 23:02:27.546167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.274 ms 00:21:48.462 [2024-12-13 23:02:27.546193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.462 [2024-12-13 23:02:27.546256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.462 [2024-12-13 23:02:27.546283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.462 [2024-12-13 23:02:27.546311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:48.462 [2024-12-13 23:02:27.546331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.462 [2024-12-13 23:02:27.546972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.462 [2024-12-13 23:02:27.547056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.462 [2024-12-13 
23:02:27.547080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:21:48.462 [2024-12-13 23:02:27.547185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.462 [2024-12-13 23:02:27.547369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.462 [2024-12-13 23:02:27.547403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.462 [2024-12-13 23:02:27.547482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:21:48.462 [2024-12-13 23:02:27.547505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.462 [2024-12-13 23:02:27.563407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.462 [2024-12-13 23:02:27.563565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.462 [2024-12-13 23:02:27.563625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.865 ms 00:21:48.462 [2024-12-13 23:02:27.563648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.462 [2024-12-13 23:02:27.578150] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:48.462 [2024-12-13 23:02:27.578333] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:48.462 [2024-12-13 23:02:27.578400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.462 [2024-12-13 23:02:27.578422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:48.462 [2024-12-13 23:02:27.578442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.594 ms 00:21:48.462 [2024-12-13 23:02:27.578461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.604505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.604678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:48.724 [2024-12-13 23:02:27.604741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.990 ms 00:21:48.724 [2024-12-13 23:02:27.604786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.617899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.618059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:48.724 [2024-12-13 23:02:27.618117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.040 ms 00:21:48.724 [2024-12-13 23:02:27.618139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.630701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.630883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:48.724 [2024-12-13 23:02:27.630945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.486 ms 00:21:48.724 [2024-12-13 23:02:27.630968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.632033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.632214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:48.724 [2024-12-13 23:02:27.632335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.575 ms 00:21:48.724 [2024-12-13 23:02:27.632360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.698100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.698338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:48.724 [2024-12-13 23:02:27.698417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.692 ms 00:21:48.724 [2024-12-13 23:02:27.698431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.709658] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:48.724 [2024-12-13 23:02:27.713147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.713194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:48.724 [2024-12-13 23:02:27.713209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.671 ms 00:21:48.724 [2024-12-13 23:02:27.713218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.713312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.713325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:48.724 [2024-12-13 23:02:27.713335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:48.724 [2024-12-13 23:02:27.713347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.713421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.713433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:48.724 [2024-12-13 23:02:27.713443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:48.724 [2024-12-13 23:02:27.713452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.713473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.713482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:48.724 [2024-12-13 23:02:27.713491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:48.724 [2024-12-13 23:02:27.713499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.713540] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:48.724 [2024-12-13 23:02:27.713552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.713561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:48.724 [2024-12-13 23:02:27.713570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:48.724 [2024-12-13 23:02:27.713578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.724 [2024-12-13 23:02:27.739595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.739644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:48.724 [2024-12-13 23:02:27.739664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.996 ms 00:21:48.724 [2024-12-13 23:02:27.739672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:48.724 [2024-12-13 23:02:27.739811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.724 [2024-12-13 23:02:27.739824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:48.724 [2024-12-13 23:02:27.739835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:21:48.725 [2024-12-13 23:02:27.739844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.725 [2024-12-13 23:02:27.741212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.831 ms, result 0 00:21:50.117  [2024-12-13T23:02:30.199Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-13T23:02:31.144Z] Copying: 36/1024 [MB] (15 MBps) [2024-12-13T23:02:32.097Z] Copying: 59/1024 [MB] (22 MBps) [2024-12-13T23:02:33.047Z] Copying: 75/1024 [MB] (15 MBps) [2024-12-13T23:02:33.992Z] Copying: 88/1024 [MB] (13 MBps) [2024-12-13T23:02:35.030Z] Copying: 99/1024 [MB] (10 MBps) [2024-12-13T23:02:35.979Z] Copying: 109/1024 [MB] (10 MBps) [2024-12-13T23:02:37.367Z] Copying: 120/1024 [MB] (10 MBps) [2024-12-13T23:02:37.939Z] Copying: 134/1024 [MB] (14 MBps) [2024-12-13T23:02:39.324Z] Copying: 149/1024 [MB] (15 MBps) [2024-12-13T23:02:40.266Z] Copying: 163/1024 [MB] (13 MBps) [2024-12-13T23:02:41.210Z] Copying: 179/1024 [MB] (15 MBps) [2024-12-13T23:02:42.153Z] Copying: 190/1024 [MB] (10 MBps) [2024-12-13T23:02:43.093Z] Copying: 202/1024 [MB] (12 MBps) [2024-12-13T23:02:44.031Z] Copying: 212/1024 [MB] (10 MBps) [2024-12-13T23:02:44.972Z] Copying: 223/1024 [MB] (10 MBps) [2024-12-13T23:02:46.351Z] Copying: 243/1024 [MB] (19 MBps) [2024-12-13T23:02:47.289Z] Copying: 261/1024 [MB] (18 MBps) [2024-12-13T23:02:48.228Z] Copying: 286/1024 [MB] (24 MBps) [2024-12-13T23:02:49.169Z] Copying: 310/1024 [MB] (23 MBps) [2024-12-13T23:02:50.118Z] Copying: 331/1024 [MB] (21 MBps) [2024-12-13T23:02:51.061Z] Copying: 351/1024 [MB] (20 MBps) [2024-12-13T23:02:52.014Z] Copying: 371/1024 [MB] (19 MBps) [2024-12-13T23:02:52.958Z] Copying: 392/1024 [MB] (21 MBps) [2024-12-13T23:02:54.355Z] Copying: 410/1024 [MB] (17 MBps) [2024-12-13T23:02:55.298Z] Copying: 428/1024 [MB] (18 MBps) [2024-12-13T23:02:56.240Z] Copying: 448/1024 [MB] (20 MBps) [2024-12-13T23:02:57.195Z] Copying: 469/1024 [MB] (20 MBps) [2024-12-13T23:02:58.187Z] Copying: 489/1024 [MB] (20 MBps) [2024-12-13T23:02:59.134Z] Copying: 507/1024 [MB] (17 MBps) [2024-12-13T23:03:00.075Z] Copying: 526/1024 [MB] (19 MBps) [2024-12-13T23:03:01.019Z] Copying: 545/1024 [MB] (19 MBps) [2024-12-13T23:03:01.959Z] Copying: 558/1024 [MB] (12 MBps) [2024-12-13T23:03:03.345Z] Copying: 575/1024 [MB] (16 MBps) [2024-12-13T23:03:04.288Z] Copying: 589/1024 [MB] (13 MBps) [2024-12-13T23:03:05.230Z] Copying: 601/1024 [MB] (11 MBps) [2024-12-13T23:03:06.175Z] Copying: 611/1024 [MB] (10 MBps) [2024-12-13T23:03:07.116Z] Copying: 622/1024 [MB] (10 MBps) [2024-12-13T23:03:08.060Z] Copying: 634/1024 [MB] (12 MBps) [2024-12-13T23:03:09.007Z] Copying: 645/1024 [MB] (10 MBps) [2024-12-13T23:03:09.962Z] Copying: 657/1024 [MB] (11 MBps) [2024-12-13T23:03:11.353Z] Copying: 667/1024 [MB] (10 MBps) [2024-12-13T23:03:11.935Z] Copying: 677/1024 [MB] (10 MBps) [2024-12-13T23:03:13.327Z] Copying: 689/1024 [MB] (11 MBps) [2024-12-13T23:03:14.274Z] Copying: 700/1024 [MB] (10 MBps) [2024-12-13T23:03:15.227Z] Copying: 710/1024 [MB] (10 MBps) [2024-12-13T23:03:16.172Z] Copying: 724/1024 [MB] (13 MBps) [2024-12-13T23:03:17.112Z] Copying: 745/1024 [MB] (20 MBps) 
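The "Copying: N/1024 [MB] (x MBps)" records in this stretch are spdk_dd progress output for the 1 GiB read from ftl0; the run ends below with an overall "average 15 MBps". A small sketch for recovering the per-interval rates from a saved copy of this console output and averaging them (the build.log filename is an assumption, not a file this job produces):

grep -o 'Copying: [0-9]*/1024 \[MB\] ([0-9]* MBps)' build.log \
  | awk '{ gsub(/\(/, "", $4); sum += $4; n++ }
         END { if (n) printf "intervals=%d mean=%.1f MBps\n", n, sum / n }'

The per-interval figures range from roughly 10 to 24 MBps, consistent with the 15 MBps average reported at the end of the copy.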
[2024-12-13T23:03:18.054Z] Copying: 757/1024 [MB] (12 MBps) [2024-12-13T23:03:18.997Z] Copying: 773/1024 [MB] (15 MBps) [2024-12-13T23:03:19.938Z] Copying: 790/1024 [MB] (17 MBps) [2024-12-13T23:03:21.377Z] Copying: 812/1024 [MB] (21 MBps) [2024-12-13T23:03:21.949Z] Copying: 824/1024 [MB] (11 MBps) [2024-12-13T23:03:23.338Z] Copying: 840/1024 [MB] (16 MBps) [2024-12-13T23:03:24.282Z] Copying: 853/1024 [MB] (12 MBps) [2024-12-13T23:03:25.234Z] Copying: 863/1024 [MB] (10 MBps) [2024-12-13T23:03:26.179Z] Copying: 878/1024 [MB] (15 MBps) [2024-12-13T23:03:27.120Z] Copying: 898/1024 [MB] (19 MBps) [2024-12-13T23:03:28.075Z] Copying: 911/1024 [MB] (13 MBps) [2024-12-13T23:03:29.020Z] Copying: 935/1024 [MB] (23 MBps) [2024-12-13T23:03:29.967Z] Copying: 952/1024 [MB] (16 MBps) [2024-12-13T23:03:31.354Z] Copying: 963/1024 [MB] (11 MBps) [2024-12-13T23:03:32.295Z] Copying: 981/1024 [MB] (18 MBps) [2024-12-13T23:03:33.240Z] Copying: 1004/1024 [MB] (22 MBps) [2024-12-13T23:03:33.240Z] Copying: 1020/1024 [MB] (16 MBps) [2024-12-13T23:03:33.502Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-13 23:03:33.404032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.362 [2024-12-13 23:03:33.404120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:54.362 [2024-12-13 23:03:33.404137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:54.362 [2024-12-13 23:03:33.404146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.362 [2024-12-13 23:03:33.404171] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:54.362 [2024-12-13 23:03:33.407257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.362 [2024-12-13 23:03:33.407464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:54.362 [2024-12-13 23:03:33.407487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.070 ms 00:22:54.362 [2024-12-13 23:03:33.407496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.362 [2024-12-13 23:03:33.407773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.362 [2024-12-13 23:03:33.407786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:54.362 [2024-12-13 23:03:33.407797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:22:54.362 [2024-12-13 23:03:33.407805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.362 [2024-12-13 23:03:33.411249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.362 [2024-12-13 23:03:33.411272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:54.362 [2024-12-13 23:03:33.411281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.429 ms 00:22:54.362 [2024-12-13 23:03:33.411294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.362 [2024-12-13 23:03:33.418492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.362 [2024-12-13 23:03:33.418655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:54.362 [2024-12-13 23:03:33.418675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.181 ms 00:22:54.362 [2024-12-13 23:03:33.418683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.362 [2024-12-13 23:03:33.447788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:54.362 [2024-12-13 23:03:33.447842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:54.362 [2024-12-13 23:03:33.447855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.038 ms 00:22:54.362 [2024-12-13 23:03:33.447864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.362 [2024-12-13 23:03:33.463282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.362 [2024-12-13 23:03:33.463332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:54.362 [2024-12-13 23:03:33.463346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.369 ms 00:22:54.362 [2024-12-13 23:03:33.463354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.362 [2024-12-13 23:03:33.463514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.362 [2024-12-13 23:03:33.463526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:54.362 [2024-12-13 23:03:33.463536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:54.362 [2024-12-13 23:03:33.463544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.362 [2024-12-13 23:03:33.489437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.362 [2024-12-13 23:03:33.489484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:54.362 [2024-12-13 23:03:33.489496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:22:54.362 [2024-12-13 23:03:33.489503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.625 [2024-12-13 23:03:33.515085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.625 [2024-12-13 23:03:33.515130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:54.625 [2024-12-13 23:03:33.515142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.538 ms 00:22:54.625 [2024-12-13 23:03:33.515150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.625 [2024-12-13 23:03:33.539764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.625 [2024-12-13 23:03:33.539811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:54.625 [2024-12-13 23:03:33.539824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.562 ms 00:22:54.625 [2024-12-13 23:03:33.539831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.625 [2024-12-13 23:03:33.564318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.625 [2024-12-13 23:03:33.564365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:54.625 [2024-12-13 23:03:33.564376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.412 ms 00:22:54.625 [2024-12-13 23:03:33.564383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.625 [2024-12-13 23:03:33.564427] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:54.625 [2024-12-13 23:03:33.564450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564472] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564665] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 
[2024-12-13 23:03:33.564881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:54.625 [2024-12-13 23:03:33.564953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.564961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.564969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.564978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.564987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.564996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:22:54.626 [2024-12-13 23:03:33.565112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:54.626 [2024-12-13 23:03:33.565310] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:54.626 [2024-12-13 23:03:33.565318] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae297e55-fec4-44cb-be54-881087e1b917 
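The ftl_dev_dump_bands block above prints one line per band in the form "Band N: <valid> / 261120 wr_cnt: <writes> state: <state>". After this read-only pass every band is still "0 / 261120 wr_cnt: 0 state: free", which is consistent with the "user writes: 0" counter in the statistics that follow. A hedged one-liner for scanning a saved copy of the log (again assuming a build.log file) and printing any band that is not free:

grep -o 'Band [0-9]*: [0-9]* / 261120 wr_cnt: [0-9]* state: [a-z]*' build.log \
  | awk '$9 != "free" { print }'   # field 9 is the state; no output means all bands are free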
00:22:54.626 [2024-12-13 23:03:33.565327] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:54.626 [2024-12-13 23:03:33.565335] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:54.626 [2024-12-13 23:03:33.565342] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:54.626 [2024-12-13 23:03:33.565351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:54.626 [2024-12-13 23:03:33.565365] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:54.626 [2024-12-13 23:03:33.565374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:54.626 [2024-12-13 23:03:33.565382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:54.626 [2024-12-13 23:03:33.565389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:54.626 [2024-12-13 23:03:33.565395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:54.626 [2024-12-13 23:03:33.565403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.626 [2024-12-13 23:03:33.565410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:54.626 [2024-12-13 23:03:33.565420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:22:54.626 [2024-12-13 23:03:33.565430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.626 [2024-12-13 23:03:33.579100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.626 [2024-12-13 23:03:33.579143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:54.626 [2024-12-13 23:03:33.579155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.651 ms 00:22:54.626 [2024-12-13 23:03:33.579163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.626 [2024-12-13 23:03:33.579560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.626 [2024-12-13 23:03:33.579570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:54.626 [2024-12-13 23:03:33.579587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:22:54.626 [2024-12-13 23:03:33.579595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.626 [2024-12-13 23:03:33.616318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.626 [2024-12-13 23:03:33.616366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.626 [2024-12-13 23:03:33.616379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.626 [2024-12-13 23:03:33.616389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.626 [2024-12-13 23:03:33.616452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.626 [2024-12-13 23:03:33.616462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.626 [2024-12-13 23:03:33.616477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.626 [2024-12-13 23:03:33.616486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.626 [2024-12-13 23:03:33.616577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.626 [2024-12-13 23:03:33.616589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.626 [2024-12-13 23:03:33.616599] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.626 [2024-12-13 23:03:33.616607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.626 [2024-12-13 23:03:33.616624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.626 [2024-12-13 23:03:33.616633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.626 [2024-12-13 23:03:33.616643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.626 [2024-12-13 23:03:33.616653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.626 [2024-12-13 23:03:33.713444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.626 [2024-12-13 23:03:33.713518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.626 [2024-12-13 23:03:33.713536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.627 [2024-12-13 23:03:33.713547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.888 [2024-12-13 23:03:33.788141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.888 [2024-12-13 23:03:33.788210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.888 [2024-12-13 23:03:33.788233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.888 [2024-12-13 23:03:33.788242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.888 [2024-12-13 23:03:33.788332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.888 [2024-12-13 23:03:33.788345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.888 [2024-12-13 23:03:33.788355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.888 [2024-12-13 23:03:33.788363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.888 [2024-12-13 23:03:33.788436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.888 [2024-12-13 23:03:33.788450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.888 [2024-12-13 23:03:33.788461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.888 [2024-12-13 23:03:33.788470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.888 [2024-12-13 23:03:33.788584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.888 [2024-12-13 23:03:33.788598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.888 [2024-12-13 23:03:33.788608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.888 [2024-12-13 23:03:33.788618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.888 [2024-12-13 23:03:33.788657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.888 [2024-12-13 23:03:33.788668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:54.888 [2024-12-13 23:03:33.788678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.888 [2024-12-13 23:03:33.788688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.888 [2024-12-13 23:03:33.788747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.888 [2024-12-13 23:03:33.788786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:22:54.888 [2024-12-13 23:03:33.788796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.888 [2024-12-13 23:03:33.788805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.888 [2024-12-13 23:03:33.788867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.888 [2024-12-13 23:03:33.788881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.888 [2024-12-13 23:03:33.788892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.888 [2024-12-13 23:03:33.788901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.888 [2024-12-13 23:03:33.789068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.996 ms, result 0 00:22:55.460 00:22:55.460 00:22:55.460 23:03:34 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:58.005 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:58.005 23:03:36 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:58.005 [2024-12-13 23:03:36.640750] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:58.005 [2024-12-13 23:03:36.640879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80447 ] 00:22:58.005 [2024-12-13 23:03:36.801853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.005 [2024-12-13 23:03:36.931133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.267 [2024-12-13 23:03:37.271513] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:58.267 [2024-12-13 23:03:37.271611] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:58.528 [2024-12-13 23:03:37.436170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 [2024-12-13 23:03:37.436236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:58.528 [2024-12-13 23:03:37.436252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:58.528 [2024-12-13 23:03:37.436261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.436321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 [2024-12-13 23:03:37.436334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:58.528 [2024-12-13 23:03:37.436344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:58.528 [2024-12-13 23:03:37.436352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.436373] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:58.528 [2024-12-13 23:03:37.437152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:58.528 [2024-12-13 23:03:37.437173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 
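Taken together, the three ftl.ftl_restore steps logged above (restore.sh@74, @76 and @79) read the FTL device back into a file, verify it, and write the file back at an offset. A condensed restatement of those commands, using the flags and paths exactly as they appear in the log (the SPDK shell variable is shorthand added here, and "4 KiB per unit" is inferred from the 1024 MB copy total rather than stated by the log):

SPDK=/home/vagrant/spdk_repo/spdk
# restore.sh@74: read 262144 units (1024 MB at 4 KiB each) from ftl0 into testfile
$SPDK/build/bin/spdk_dd --ib=ftl0 --of=$SPDK/test/ftl/testfile \
    --json=$SPDK/test/ftl/config/ftl.json --count=262144
# restore.sh@76: check the dumped data against the stored checksum ("testfile: OK" above)
md5sum -c $SPDK/test/ftl/testfile.md5
# restore.sh@79: write the same file back into ftl0, offset by 131072 units
$SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/testfile --ob=ftl0 \
    --json=$SPDK/test/ftl/config/ftl.json --seek=131072

The FTL startup trace that resumes below (pid 80447) belongs to this third step.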
[2024-12-13 23:03:37.437182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:58.528 [2024-12-13 23:03:37.437191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:22:58.528 [2024-12-13 23:03:37.437199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.439031] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:58.528 [2024-12-13 23:03:37.453601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 [2024-12-13 23:03:37.453654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:58.528 [2024-12-13 23:03:37.453668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.571 ms 00:22:58.528 [2024-12-13 23:03:37.453677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.453783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 [2024-12-13 23:03:37.453795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:58.528 [2024-12-13 23:03:37.453805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:58.528 [2024-12-13 23:03:37.453815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.462282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 [2024-12-13 23:03:37.462330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:58.528 [2024-12-13 23:03:37.462341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.381 ms 00:22:58.528 [2024-12-13 23:03:37.462357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.462438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 [2024-12-13 23:03:37.462448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:58.528 [2024-12-13 23:03:37.462458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:58.528 [2024-12-13 23:03:37.462465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.462513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 [2024-12-13 23:03:37.462523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:58.528 [2024-12-13 23:03:37.462531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:58.528 [2024-12-13 23:03:37.462539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.462567] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:58.528 [2024-12-13 23:03:37.466787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 [2024-12-13 23:03:37.466830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:58.528 [2024-12-13 23:03:37.466844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.226 ms 00:22:58.528 [2024-12-13 23:03:37.466852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.466890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.528 [2024-12-13 23:03:37.466900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:58.528 
[2024-12-13 23:03:37.466909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:58.528 [2024-12-13 23:03:37.466918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.528 [2024-12-13 23:03:37.466971] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:58.528 [2024-12-13 23:03:37.466996] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:58.528 [2024-12-13 23:03:37.467034] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:58.528 [2024-12-13 23:03:37.467053] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:58.528 [2024-12-13 23:03:37.467160] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:58.528 [2024-12-13 23:03:37.467172] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:58.529 [2024-12-13 23:03:37.467183] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:58.529 [2024-12-13 23:03:37.467194] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467204] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467213] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:58.529 [2024-12-13 23:03:37.467221] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:58.529 [2024-12-13 23:03:37.467229] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:58.529 [2024-12-13 23:03:37.467240] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:58.529 [2024-12-13 23:03:37.467249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.529 [2024-12-13 23:03:37.467257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:58.529 [2024-12-13 23:03:37.467265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:22:58.529 [2024-12-13 23:03:37.467272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.529 [2024-12-13 23:03:37.467356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.529 [2024-12-13 23:03:37.467365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:58.529 [2024-12-13 23:03:37.467373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:58.529 [2024-12-13 23:03:37.467381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.529 [2024-12-13 23:03:37.467485] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:58.529 [2024-12-13 23:03:37.467497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:58.529 [2024-12-13 23:03:37.467506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:58.529 [2024-12-13 23:03:37.467529] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:58.529 [2024-12-13 23:03:37.467551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:58.529 [2024-12-13 23:03:37.467564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:58.529 [2024-12-13 23:03:37.467571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:58.529 [2024-12-13 23:03:37.467577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:58.529 [2024-12-13 23:03:37.467592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:58.529 [2024-12-13 23:03:37.467599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:58.529 [2024-12-13 23:03:37.467606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:58.529 [2024-12-13 23:03:37.467622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:58.529 [2024-12-13 23:03:37.467644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:58.529 [2024-12-13 23:03:37.467663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:58.529 [2024-12-13 23:03:37.467683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:58.529 [2024-12-13 23:03:37.467703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:58.529 [2024-12-13 23:03:37.467721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:58.529 [2024-12-13 23:03:37.467774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:58.529 [2024-12-13 23:03:37.467783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:58.529 [2024-12-13 23:03:37.467790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:58.529 [2024-12-13 
23:03:37.467797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:58.529 [2024-12-13 23:03:37.467804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:58.529 [2024-12-13 23:03:37.467812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:58.529 [2024-12-13 23:03:37.467827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:58.529 [2024-12-13 23:03:37.467834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467841] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:58.529 [2024-12-13 23:03:37.467849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:58.529 [2024-12-13 23:03:37.467857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.529 [2024-12-13 23:03:37.467873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:58.529 [2024-12-13 23:03:37.467881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:58.529 [2024-12-13 23:03:37.467889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:58.529 [2024-12-13 23:03:37.467896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:58.529 [2024-12-13 23:03:37.467904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:58.529 [2024-12-13 23:03:37.467911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:58.529 [2024-12-13 23:03:37.467920] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:58.529 [2024-12-13 23:03:37.467930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.529 [2024-12-13 23:03:37.467941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:58.529 [2024-12-13 23:03:37.467949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:58.529 [2024-12-13 23:03:37.467957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:58.529 [2024-12-13 23:03:37.467965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:58.529 [2024-12-13 23:03:37.467972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:58.529 [2024-12-13 23:03:37.467979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:58.529 [2024-12-13 23:03:37.467986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:58.529 [2024-12-13 23:03:37.467993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:58.529 [2024-12-13 23:03:37.468001] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:58.529 [2024-12-13 23:03:37.468008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:58.529 [2024-12-13 23:03:37.468017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:58.529 [2024-12-13 23:03:37.468025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:58.529 [2024-12-13 23:03:37.468032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:58.529 [2024-12-13 23:03:37.468039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:58.529 [2024-12-13 23:03:37.468046] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:58.529 [2024-12-13 23:03:37.468055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.529 [2024-12-13 23:03:37.468062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:58.529 [2024-12-13 23:03:37.468069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:58.529 [2024-12-13 23:03:37.468076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:58.529 [2024-12-13 23:03:37.468083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:58.529 [2024-12-13 23:03:37.468090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.529 [2024-12-13 23:03:37.468098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:58.529 [2024-12-13 23:03:37.468107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:22:58.529 [2024-12-13 23:03:37.468115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.529 [2024-12-13 23:03:37.500602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.529 [2024-12-13 23:03:37.500653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.529 [2024-12-13 23:03:37.500665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.442 ms 00:22:58.529 [2024-12-13 23:03:37.500678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.529 [2024-12-13 23:03:37.500785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.529 [2024-12-13 23:03:37.500796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:58.529 [2024-12-13 23:03:37.500805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:58.529 [2024-12-13 23:03:37.500813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.545552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.545802] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.530 [2024-12-13 23:03:37.545826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.658 ms 00:22:58.530 [2024-12-13 23:03:37.545836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.545891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.545902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.530 [2024-12-13 23:03:37.545917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:58.530 [2024-12-13 23:03:37.545926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.546535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.546571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.530 [2024-12-13 23:03:37.546583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:22:58.530 [2024-12-13 23:03:37.546592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.546746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.546792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.530 [2024-12-13 23:03:37.546806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:22:58.530 [2024-12-13 23:03:37.546814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.562754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.562821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.530 [2024-12-13 23:03:37.562833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.918 ms 00:22:58.530 [2024-12-13 23:03:37.562842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.577236] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:58.530 [2024-12-13 23:03:37.577440] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:58.530 [2024-12-13 23:03:37.577463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.577472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:58.530 [2024-12-13 23:03:37.577483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.509 ms 00:22:58.530 [2024-12-13 23:03:37.577492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.603530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.603583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:58.530 [2024-12-13 23:03:37.603597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.992 ms 00:22:58.530 [2024-12-13 23:03:37.603606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.616524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.616573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band 
info metadata 00:22:58.530 [2024-12-13 23:03:37.616585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.849 ms 00:22:58.530 [2024-12-13 23:03:37.616593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.628999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.629046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:58.530 [2024-12-13 23:03:37.629059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.356 ms 00:22:58.530 [2024-12-13 23:03:37.629066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.530 [2024-12-13 23:03:37.629706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.530 [2024-12-13 23:03:37.629731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:58.530 [2024-12-13 23:03:37.629745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:22:58.530 [2024-12-13 23:03:37.629771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.792 [2024-12-13 23:03:37.695286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.792 [2024-12-13 23:03:37.695538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:58.792 [2024-12-13 23:03:37.695572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.493 ms 00:22:58.792 [2024-12-13 23:03:37.695582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.792 [2024-12-13 23:03:37.707043] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:58.792 [2024-12-13 23:03:37.710225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.792 [2024-12-13 23:03:37.710407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:58.792 [2024-12-13 23:03:37.710428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.510 ms 00:22:58.792 [2024-12-13 23:03:37.710437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.792 [2024-12-13 23:03:37.710535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.792 [2024-12-13 23:03:37.710548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:58.792 [2024-12-13 23:03:37.710558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:58.792 [2024-12-13 23:03:37.710569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.792 [2024-12-13 23:03:37.710644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.792 [2024-12-13 23:03:37.710654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:58.792 [2024-12-13 23:03:37.710663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:58.792 [2024-12-13 23:03:37.710672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.792 [2024-12-13 23:03:37.710694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.792 [2024-12-13 23:03:37.710704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:58.792 [2024-12-13 23:03:37.710712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:58.792 [2024-12-13 23:03:37.710720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.792 
[2024-12-13 23:03:37.710784] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:58.792 [2024-12-13 23:03:37.710796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.792 [2024-12-13 23:03:37.710804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:58.792 [2024-12-13 23:03:37.710813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:58.792 [2024-12-13 23:03:37.710822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.792 [2024-12-13 23:03:37.736923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.792 [2024-12-13 23:03:37.737105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:58.792 [2024-12-13 23:03:37.737137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.082 ms 00:22:58.792 [2024-12-13 23:03:37.737146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.792 [2024-12-13 23:03:37.737227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.792 [2024-12-13 23:03:37.737237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:58.792 [2024-12-13 23:03:37.737246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:58.792 [2024-12-13 23:03:37.737255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.792 [2024-12-13 23:03:37.738567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.912 ms, result 0 00:22:59.738  [2024-12-13T23:03:39.824Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-13T23:03:40.767Z] Copying: 28192/1048576 [kB] (10096 kBps) [2024-12-13T23:03:42.157Z] Copying: 38/1024 [MB] (10 MBps) [2024-12-13T23:03:43.119Z] Copying: 49/1024 [MB] (11 MBps) [2024-12-13T23:03:44.089Z] Copying: 60/1024 [MB] (11 MBps) [2024-12-13T23:03:45.033Z] Copying: 71/1024 [MB] (10 MBps) [2024-12-13T23:03:45.972Z] Copying: 82/1024 [MB] (11 MBps) [2024-12-13T23:03:46.919Z] Copying: 94/1024 [MB] (11 MBps) [2024-12-13T23:03:47.865Z] Copying: 105/1024 [MB] (11 MBps) [2024-12-13T23:03:48.808Z] Copying: 116/1024 [MB] (11 MBps) [2024-12-13T23:03:49.757Z] Copying: 127/1024 [MB] (10 MBps) [2024-12-13T23:03:51.144Z] Copying: 137/1024 [MB] (10 MBps) [2024-12-13T23:03:52.086Z] Copying: 148/1024 [MB] (10 MBps) [2024-12-13T23:03:53.031Z] Copying: 160/1024 [MB] (11 MBps) [2024-12-13T23:03:53.983Z] Copying: 171/1024 [MB] (11 MBps) [2024-12-13T23:03:54.928Z] Copying: 182/1024 [MB] (11 MBps) [2024-12-13T23:03:55.874Z] Copying: 200/1024 [MB] (17 MBps) [2024-12-13T23:03:56.820Z] Copying: 213/1024 [MB] (13 MBps) [2024-12-13T23:03:57.764Z] Copying: 227/1024 [MB] (13 MBps) [2024-12-13T23:03:59.151Z] Copying: 237/1024 [MB] (10 MBps) [2024-12-13T23:04:00.094Z] Copying: 248/1024 [MB] (10 MBps) [2024-12-13T23:04:01.041Z] Copying: 264112/1048576 [kB] (10056 kBps) [2024-12-13T23:04:01.985Z] Copying: 268/1024 [MB] (10 MBps) [2024-12-13T23:04:02.930Z] Copying: 278/1024 [MB] (10 MBps) [2024-12-13T23:04:03.883Z] Copying: 290/1024 [MB] (11 MBps) [2024-12-13T23:04:04.838Z] Copying: 301/1024 [MB] (10 MBps) [2024-12-13T23:04:05.780Z] Copying: 313/1024 [MB] (12 MBps) [2024-12-13T23:04:07.197Z] Copying: 323/1024 [MB] (10 MBps) [2024-12-13T23:04:07.796Z] Copying: 335/1024 [MB] (12 MBps) [2024-12-13T23:04:09.189Z] Copying: 347/1024 [MB] (11 MBps) [2024-12-13T23:04:09.761Z] Copying: 359/1024 [MB] (11 MBps) 
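The Copying lines running through this stretch are the write phase started by the restore.sh@79 command earlier in the log: once the 'FTL startup' management process finishes (result 0), spdk_dd streams the 1024 MB testfile into the ftl0 bdev at block offset 131072. A minimal standalone sketch of that step, using the paths and flags exactly as they appear in the log; wrapping them in a separate snippet is illustrative and is not the actual restore.sh logic:

  # Sketch of the restore.sh@79 write phase; paths and flags copied from the log above.
  SPDK=/home/vagrant/spdk_repo/spdk
  # restore.sh@76 first checks testfile against its recorded checksum (reported OK above).
  md5sum -c "$SPDK/test/ftl/testfile.md5"
  # Then the test pattern is written into the FTL bdev starting at LBA offset 131072.
  "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" --ob=ftl0 \
      --json="$SPDK/test/ftl/config/ftl.json" --seek=131072
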
[2024-12-13T23:04:11.145Z] Copying: 370/1024 [MB] (11 MBps) [2024-12-13T23:04:12.090Z] Copying: 394/1024 [MB] (23 MBps) [2024-12-13T23:04:13.035Z] Copying: 406/1024 [MB] (12 MBps) [2024-12-13T23:04:13.982Z] Copying: 417/1024 [MB] (11 MBps) [2024-12-13T23:04:14.928Z] Copying: 432/1024 [MB] (14 MBps) [2024-12-13T23:04:15.873Z] Copying: 446/1024 [MB] (14 MBps) [2024-12-13T23:04:16.820Z] Copying: 458/1024 [MB] (11 MBps) [2024-12-13T23:04:17.765Z] Copying: 479400/1048576 [kB] (9968 kBps) [2024-12-13T23:04:19.150Z] Copying: 489392/1048576 [kB] (9992 kBps) [2024-12-13T23:04:20.094Z] Copying: 497/1024 [MB] (19 MBps) [2024-12-13T23:04:21.040Z] Copying: 523/1024 [MB] (26 MBps) [2024-12-13T23:04:21.983Z] Copying: 539/1024 [MB] (15 MBps) [2024-12-13T23:04:22.928Z] Copying: 552/1024 [MB] (13 MBps) [2024-12-13T23:04:23.871Z] Copying: 563/1024 [MB] (10 MBps) [2024-12-13T23:04:24.807Z] Copying: 583/1024 [MB] (19 MBps) [2024-12-13T23:04:26.193Z] Copying: 617/1024 [MB] (34 MBps) [2024-12-13T23:04:26.766Z] Copying: 636/1024 [MB] (18 MBps) [2024-12-13T23:04:28.150Z] Copying: 656/1024 [MB] (19 MBps) [2024-12-13T23:04:29.094Z] Copying: 674/1024 [MB] (18 MBps) [2024-12-13T23:04:30.052Z] Copying: 688/1024 [MB] (13 MBps) [2024-12-13T23:04:31.076Z] Copying: 699/1024 [MB] (11 MBps) [2024-12-13T23:04:32.020Z] Copying: 709/1024 [MB] (10 MBps) [2024-12-13T23:04:32.961Z] Copying: 720/1024 [MB] (10 MBps) [2024-12-13T23:04:33.905Z] Copying: 731/1024 [MB] (11 MBps) [2024-12-13T23:04:34.850Z] Copying: 743/1024 [MB] (11 MBps) [2024-12-13T23:04:35.800Z] Copying: 754/1024 [MB] (11 MBps) [2024-12-13T23:04:37.187Z] Copying: 765/1024 [MB] (11 MBps) [2024-12-13T23:04:37.758Z] Copying: 776/1024 [MB] (10 MBps) [2024-12-13T23:04:39.145Z] Copying: 788/1024 [MB] (11 MBps) [2024-12-13T23:04:40.086Z] Copying: 799/1024 [MB] (11 MBps) [2024-12-13T23:04:41.029Z] Copying: 810/1024 [MB] (10 MBps) [2024-12-13T23:04:41.970Z] Copying: 820/1024 [MB] (10 MBps) [2024-12-13T23:04:42.912Z] Copying: 831/1024 [MB] (11 MBps) [2024-12-13T23:04:43.856Z] Copying: 843/1024 [MB] (11 MBps) [2024-12-13T23:04:44.808Z] Copying: 854/1024 [MB] (11 MBps) [2024-12-13T23:04:46.191Z] Copying: 866/1024 [MB] (11 MBps) [2024-12-13T23:04:46.762Z] Copying: 877/1024 [MB] (11 MBps) [2024-12-13T23:04:48.149Z] Copying: 888/1024 [MB] (11 MBps) [2024-12-13T23:04:49.093Z] Copying: 900/1024 [MB] (11 MBps) [2024-12-13T23:04:50.036Z] Copying: 911/1024 [MB] (11 MBps) [2024-12-13T23:04:50.979Z] Copying: 923/1024 [MB] (11 MBps) [2024-12-13T23:04:51.921Z] Copying: 935/1024 [MB] (12 MBps) [2024-12-13T23:04:52.864Z] Copying: 947/1024 [MB] (11 MBps) [2024-12-13T23:04:53.880Z] Copying: 959/1024 [MB] (12 MBps) [2024-12-13T23:04:54.848Z] Copying: 971/1024 [MB] (11 MBps) [2024-12-13T23:04:55.792Z] Copying: 983/1024 [MB] (11 MBps) [2024-12-13T23:04:57.181Z] Copying: 994/1024 [MB] (11 MBps) [2024-12-13T23:04:57.753Z] Copying: 1006/1024 [MB] (11 MBps) [2024-12-13T23:04:59.148Z] Copying: 1017/1024 [MB] (10 MBps) [2024-12-13T23:04:59.148Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-13 23:04:58.756266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.008 [2024-12-13 23:04:58.756330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:20.008 [2024-12-13 23:04:58.756348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:20.008 [2024-12-13 23:04:58.756355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.008 [2024-12-13 23:04:58.756564] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:20.008 [2024-12-13 23:04:58.758830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.008 [2024-12-13 23:04:58.758879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:20.009 [2024-12-13 23:04:58.758887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.250 ms 00:24:20.009 [2024-12-13 23:04:58.758893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:58.768366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:58.768393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:20.009 [2024-12-13 23:04:58.768401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.969 ms 00:24:20.009 [2024-12-13 23:04:58.768411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:58.787676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:58.787703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:20.009 [2024-12-13 23:04:58.787712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.252 ms 00:24:20.009 [2024-12-13 23:04:58.787718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:58.792373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:58.792396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:20.009 [2024-12-13 23:04:58.792403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.636 ms 00:24:20.009 [2024-12-13 23:04:58.792413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:58.811906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:58.811933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:20.009 [2024-12-13 23:04:58.811942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.468 ms 00:24:20.009 [2024-12-13 23:04:58.811948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:58.824227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:58.824253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:20.009 [2024-12-13 23:04:58.824262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.253 ms 00:24:20.009 [2024-12-13 23:04:58.824269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:59.026289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:59.026317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:20.009 [2024-12-13 23:04:59.026326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 201.991 ms 00:24:20.009 [2024-12-13 23:04:59.026332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:59.045018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:59.045043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:20.009 [2024-12-13 23:04:59.045051] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 18.674 ms 00:24:20.009 [2024-12-13 23:04:59.045057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:59.063186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:59.063209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:20.009 [2024-12-13 23:04:59.063217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.104 ms 00:24:20.009 [2024-12-13 23:04:59.063223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:59.080638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:59.080661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:20.009 [2024-12-13 23:04:59.080669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.390 ms 00:24:20.009 [2024-12-13 23:04:59.080675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:59.098277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.009 [2024-12-13 23:04:59.098438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:20.009 [2024-12-13 23:04:59.098450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.559 ms 00:24:20.009 [2024-12-13 23:04:59.098456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.009 [2024-12-13 23:04:59.098478] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:20.009 [2024-12-13 23:04:59.098491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 80896 / 261120 wr_cnt: 1 state: open 00:24:20.009 [2024-12-13 23:04:59.098500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098570] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098721] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:20.009 [2024-12-13 23:04:59.098837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 
23:04:59.098885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.098998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:24:20.010 [2024-12-13 23:04:59.099035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:20.010 [2024-12-13 23:04:59.099110] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:20.010 [2024-12-13 23:04:59.099117] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae297e55-fec4-44cb-be54-881087e1b917 00:24:20.010 [2024-12-13 23:04:59.099123] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 80896 00:24:20.010 [2024-12-13 23:04:59.099129] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 81856 00:24:20.010 [2024-12-13 23:04:59.099134] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 80896 00:24:20.010 [2024-12-13 23:04:59.099141] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119 00:24:20.010 [2024-12-13 23:04:59.099196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:20.010 [2024-12-13 23:04:59.099203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:20.010 [2024-12-13 23:04:59.099208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:20.010 [2024-12-13 23:04:59.099213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:20.010 [2024-12-13 23:04:59.099218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:20.010 [2024-12-13 23:04:59.099224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.010 [2024-12-13 23:04:59.099230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:20.010 [2024-12-13 23:04:59.099237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:24:20.010 [2024-12-13 23:04:59.099244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.010 [2024-12-13 23:04:59.109251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:20.010 [2024-12-13 23:04:59.109355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:20.010 [2024-12-13 23:04:59.109372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.994 ms 00:24:20.010 [2024-12-13 23:04:59.109378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.010 [2024-12-13 23:04:59.109658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.010 [2024-12-13 23:04:59.109667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:20.010 [2024-12-13 23:04:59.109674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:24:20.010 [2024-12-13 23:04:59.109680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.010 [2024-12-13 23:04:59.137162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.010 [2024-12-13 23:04:59.137189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:20.010 [2024-12-13 23:04:59.137197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.010 [2024-12-13 23:04:59.137204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.010 [2024-12-13 23:04:59.137251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.010 [2024-12-13 23:04:59.137258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:20.010 [2024-12-13 23:04:59.137264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.010 [2024-12-13 23:04:59.137270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.010 [2024-12-13 23:04:59.137315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.010 [2024-12-13 23:04:59.137323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:20.010 [2024-12-13 23:04:59.137334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.010 [2024-12-13 23:04:59.137340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.010 [2024-12-13 23:04:59.137352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.010 [2024-12-13 23:04:59.137359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:20.010 [2024-12-13 23:04:59.137365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.010 [2024-12-13 23:04:59.137371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.272 [2024-12-13 23:04:59.200410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.272 [2024-12-13 23:04:59.200537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:20.272 [2024-12-13 23:04:59.200552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.272 [2024-12-13 23:04:59.200558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.272 [2024-12-13 23:04:59.252257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.272 [2024-12-13 23:04:59.252294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:20.272 [2024-12-13 23:04:59.252304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.272 [2024-12-13 23:04:59.252311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.272 [2024-12-13 
23:04:59.252379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.272 [2024-12-13 23:04:59.252387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:20.272 [2024-12-13 23:04:59.252393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.272 [2024-12-13 23:04:59.252404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.272 [2024-12-13 23:04:59.252433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.272 [2024-12-13 23:04:59.252441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:20.272 [2024-12-13 23:04:59.252448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.272 [2024-12-13 23:04:59.252454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.272 [2024-12-13 23:04:59.252528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.272 [2024-12-13 23:04:59.252541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:20.272 [2024-12-13 23:04:59.252547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.272 [2024-12-13 23:04:59.252556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.272 [2024-12-13 23:04:59.252583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.272 [2024-12-13 23:04:59.252590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:20.272 [2024-12-13 23:04:59.252597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.272 [2024-12-13 23:04:59.252603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.272 [2024-12-13 23:04:59.252638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.272 [2024-12-13 23:04:59.252645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:20.272 [2024-12-13 23:04:59.252652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.272 [2024-12-13 23:04:59.252658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.272 [2024-12-13 23:04:59.252698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.272 [2024-12-13 23:04:59.252707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:20.272 [2024-12-13 23:04:59.252713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.272 [2024-12-13 23:04:59.252719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.272 [2024-12-13 23:04:59.252840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 498.039 ms, result 0 00:24:21.657 00:24:21.657 00:24:21.657 23:05:00 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:21.657 [2024-12-13 23:05:00.526124] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
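With the write-side 'FTL shutdown' finished (result 0) and the band/statistics dump emitted, restore.sh@80 launches a fresh spdk_dd process that brings the FTL device back up and reads the same region out again: --ib=ftl0 --skip=131072 --count=262144, mirroring the --seek=131072 used on the write side. The 262144-block count matches the 1024 MB written above if the FTL block size is 4 KiB, which is an inference from the numbers rather than something the log states; likewise, the WAF of 1.0119 in the statistics dump is consistent with total writes divided by user writes. A small sketch of both, under those same assumptions:

  # Sketch of the restore.sh@80 read-back; paths and flags copied from the log above.
  SPDK=/home/vagrant/spdk_repo/spdk
  # Read the region written before the shutdown back out of ftl0 into testfile.
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile" \
      --json="$SPDK/test/ftl/config/ftl.json" --skip=131072 --count=262144
  # Write amplification as reported by ftl_dev_dump_stats above:
  # total writes 81856 / user writes 80896
  awk 'BEGIN { printf "WAF: %.4f\n", 81856 / 80896 }'   # -> WAF: 1.0119
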
00:24:21.657 [2024-12-13 23:05:00.526244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81302 ] 00:24:21.657 [2024-12-13 23:05:00.681881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.657 [2024-12-13 23:05:00.767115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.917 [2024-12-13 23:05:00.998815] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:21.917 [2024-12-13 23:05:00.998873] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:22.178 [2024-12-13 23:05:01.154676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.178 [2024-12-13 23:05:01.154718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:22.178 [2024-12-13 23:05:01.154730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:22.178 [2024-12-13 23:05:01.154736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.178 [2024-12-13 23:05:01.154786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.178 [2024-12-13 23:05:01.154797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.178 [2024-12-13 23:05:01.154804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:22.179 [2024-12-13 23:05:01.154810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.154824] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:22.179 [2024-12-13 23:05:01.155387] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:22.179 [2024-12-13 23:05:01.155400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.179 [2024-12-13 23:05:01.155406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.179 [2024-12-13 23:05:01.155413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:24:22.179 [2024-12-13 23:05:01.155419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.156665] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:22.179 [2024-12-13 23:05:01.167170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.179 [2024-12-13 23:05:01.167199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:22.179 [2024-12-13 23:05:01.167208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.506 ms 00:24:22.179 [2024-12-13 23:05:01.167215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.167266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.179 [2024-12-13 23:05:01.167273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:22.179 [2024-12-13 23:05:01.167280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:22.179 [2024-12-13 23:05:01.167286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.173524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:22.179 [2024-12-13 23:05:01.173551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.179 [2024-12-13 23:05:01.173559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.197 ms 00:24:22.179 [2024-12-13 23:05:01.173568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.173625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.179 [2024-12-13 23:05:01.173631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.179 [2024-12-13 23:05:01.173638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:22.179 [2024-12-13 23:05:01.173644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.173680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.179 [2024-12-13 23:05:01.173688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:22.179 [2024-12-13 23:05:01.173696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:22.179 [2024-12-13 23:05:01.173702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.173718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:22.179 [2024-12-13 23:05:01.176856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.179 [2024-12-13 23:05:01.176881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.179 [2024-12-13 23:05:01.176890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:24:22.179 [2024-12-13 23:05:01.176896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.176927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.179 [2024-12-13 23:05:01.176934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:22.179 [2024-12-13 23:05:01.176941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:22.179 [2024-12-13 23:05:01.176947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.176961] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:22.179 [2024-12-13 23:05:01.176979] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:22.179 [2024-12-13 23:05:01.177007] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:22.179 [2024-12-13 23:05:01.177023] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:22.179 [2024-12-13 23:05:01.177106] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:22.179 [2024-12-13 23:05:01.177114] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:22.179 [2024-12-13 23:05:01.177123] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:22.179 [2024-12-13 23:05:01.177131] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177140] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177146] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:22.179 [2024-12-13 23:05:01.177152] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:22.179 [2024-12-13 23:05:01.177158] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:22.179 [2024-12-13 23:05:01.177166] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:22.179 [2024-12-13 23:05:01.177173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.179 [2024-12-13 23:05:01.177179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:22.179 [2024-12-13 23:05:01.177185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:24:22.179 [2024-12-13 23:05:01.177191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.177255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.179 [2024-12-13 23:05:01.177267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:22.179 [2024-12-13 23:05:01.177274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:22.179 [2024-12-13 23:05:01.177280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.179 [2024-12-13 23:05:01.177355] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:22.179 [2024-12-13 23:05:01.177364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:22.179 [2024-12-13 23:05:01.177371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:22.179 [2024-12-13 23:05:01.177390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:22.179 [2024-12-13 23:05:01.177407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.179 [2024-12-13 23:05:01.177418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:22.179 [2024-12-13 23:05:01.177427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:22.179 [2024-12-13 23:05:01.177433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.179 [2024-12-13 23:05:01.177444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:22.179 [2024-12-13 23:05:01.177450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:22.179 [2024-12-13 23:05:01.177455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:22.179 [2024-12-13 23:05:01.177468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177474] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:22.179 [2024-12-13 23:05:01.177485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:22.179 [2024-12-13 23:05:01.177501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:22.179 [2024-12-13 23:05:01.177517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:22.179 [2024-12-13 23:05:01.177532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:22.179 [2024-12-13 23:05:01.177547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.179 [2024-12-13 23:05:01.177558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:22.179 [2024-12-13 23:05:01.177565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:22.179 [2024-12-13 23:05:01.177570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.179 [2024-12-13 23:05:01.177576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:22.179 [2024-12-13 23:05:01.177581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:22.179 [2024-12-13 23:05:01.177585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:22.179 [2024-12-13 23:05:01.177606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:22.179 [2024-12-13 23:05:01.177612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177618] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:22.179 [2024-12-13 23:05:01.177626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:22.179 [2024-12-13 23:05:01.177632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.179 [2024-12-13 23:05:01.177638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.179 [2024-12-13 23:05:01.177644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:22.179 [2024-12-13 23:05:01.177649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:22.180 [2024-12-13 23:05:01.177656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:22.180 
[2024-12-13 23:05:01.177662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:22.180 [2024-12-13 23:05:01.177667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:22.180 [2024-12-13 23:05:01.177672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:22.180 [2024-12-13 23:05:01.177679] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:22.180 [2024-12-13 23:05:01.177686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.180 [2024-12-13 23:05:01.177695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:22.180 [2024-12-13 23:05:01.177701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:22.180 [2024-12-13 23:05:01.177706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:22.180 [2024-12-13 23:05:01.177712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:22.180 [2024-12-13 23:05:01.177719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:22.180 [2024-12-13 23:05:01.177724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:22.180 [2024-12-13 23:05:01.177729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:22.180 [2024-12-13 23:05:01.177735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:22.180 [2024-12-13 23:05:01.177740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:22.180 [2024-12-13 23:05:01.177746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:22.180 [2024-12-13 23:05:01.177751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:22.180 [2024-12-13 23:05:01.177767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:22.180 [2024-12-13 23:05:01.177773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:22.180 [2024-12-13 23:05:01.177778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:22.180 [2024-12-13 23:05:01.177784] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:22.180 [2024-12-13 23:05:01.177790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.180 [2024-12-13 23:05:01.177796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:22.180 [2024-12-13 23:05:01.177802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:22.180 [2024-12-13 23:05:01.177807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:22.180 [2024-12-13 23:05:01.177813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:22.180 [2024-12-13 23:05:01.177819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.177825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:22.180 [2024-12-13 23:05:01.177831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:24:22.180 [2024-12-13 23:05:01.177837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.201981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.202009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.180 [2024-12-13 23:05:01.202018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.099 ms 00:24:22.180 [2024-12-13 23:05:01.202027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.202094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.202100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:22.180 [2024-12-13 23:05:01.202106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:22.180 [2024-12-13 23:05:01.202112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.245549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.245582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.180 [2024-12-13 23:05:01.245596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.397 ms 00:24:22.180 [2024-12-13 23:05:01.245603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.245637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.245646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.180 [2024-12-13 23:05:01.245656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:22.180 [2024-12-13 23:05:01.245662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.246100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.246114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.180 [2024-12-13 23:05:01.246122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:24:22.180 [2024-12-13 23:05:01.246128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.246238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.246252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.180 [2024-12-13 23:05:01.246258] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:22.180 [2024-12-13 23:05:01.246266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.258126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.258149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.180 [2024-12-13 23:05:01.258159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.844 ms 00:24:22.180 [2024-12-13 23:05:01.258165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.268972] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:22.180 [2024-12-13 23:05:01.269000] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:22.180 [2024-12-13 23:05:01.269010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.269016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:22.180 [2024-12-13 23:05:01.269024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.754 ms 00:24:22.180 [2024-12-13 23:05:01.269030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.287650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.287677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:22.180 [2024-12-13 23:05:01.287686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.590 ms 00:24:22.180 [2024-12-13 23:05:01.287693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.296914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.296938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:22.180 [2024-12-13 23:05:01.296946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.191 ms 00:24:22.180 [2024-12-13 23:05:01.296952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.306005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.306029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:22.180 [2024-12-13 23:05:01.306036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.029 ms 00:24:22.180 [2024-12-13 23:05:01.306043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.180 [2024-12-13 23:05:01.306501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.180 [2024-12-13 23:05:01.306512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:22.180 [2024-12-13 23:05:01.306521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:24:22.180 [2024-12-13 23:05:01.306527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.441 [2024-12-13 23:05:01.354955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.441 [2024-12-13 23:05:01.354990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:22.441 [2024-12-13 23:05:01.355005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.414 ms 00:24:22.441 [2024-12-13 23:05:01.355012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.441 [2024-12-13 23:05:01.362996] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:22.441 [2024-12-13 23:05:01.365159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.441 [2024-12-13 23:05:01.365182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:22.441 [2024-12-13 23:05:01.365192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.113 ms 00:24:22.441 [2024-12-13 23:05:01.365200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.441 [2024-12-13 23:05:01.365256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.441 [2024-12-13 23:05:01.365264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:22.441 [2024-12-13 23:05:01.365272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:22.441 [2024-12-13 23:05:01.365280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.441 [2024-12-13 23:05:01.366461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.441 [2024-12-13 23:05:01.366486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:22.442 [2024-12-13 23:05:01.366494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:24:22.442 [2024-12-13 23:05:01.366501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.442 [2024-12-13 23:05:01.366522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.442 [2024-12-13 23:05:01.366529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:22.442 [2024-12-13 23:05:01.366536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:22.442 [2024-12-13 23:05:01.366542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.442 [2024-12-13 23:05:01.366575] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:22.442 [2024-12-13 23:05:01.366584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.442 [2024-12-13 23:05:01.366590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:22.442 [2024-12-13 23:05:01.366597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:22.442 [2024-12-13 23:05:01.366603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.442 [2024-12-13 23:05:01.385205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.442 [2024-12-13 23:05:01.385376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:22.442 [2024-12-13 23:05:01.385395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.588 ms 00:24:22.442 [2024-12-13 23:05:01.385402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.442 [2024-12-13 23:05:01.385456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.442 [2024-12-13 23:05:01.385465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:22.442 [2024-12-13 23:05:01.385472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:22.442 [2024-12-13 23:05:01.385478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:22.442 [2024-12-13 23:05:01.386372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 231.324 ms, result 0 00:24:23.839  [2024-12-13T23:05:03.552Z] Copying: 8248/1048576 [kB] (8248 kBps) [2024-12-13T23:05:04.938Z] Copying: 18/1024 [MB] (10 MBps) [2024-12-13T23:05:05.886Z] Copying: 30/1024 [MB] (11 MBps) [2024-12-13T23:05:06.830Z] Copying: 41/1024 [MB] (11 MBps) [2024-12-13T23:05:07.774Z] Copying: 52/1024 [MB] (11 MBps) [2024-12-13T23:05:08.717Z] Copying: 64/1024 [MB] (12 MBps) [2024-12-13T23:05:09.662Z] Copying: 76/1024 [MB] (12 MBps) [2024-12-13T23:05:10.605Z] Copying: 88/1024 [MB] (11 MBps) [2024-12-13T23:05:11.555Z] Copying: 100/1024 [MB] (11 MBps) [2024-12-13T23:05:12.945Z] Copying: 112/1024 [MB] (12 MBps) [2024-12-13T23:05:13.887Z] Copying: 124/1024 [MB] (11 MBps) [2024-12-13T23:05:14.831Z] Copying: 136/1024 [MB] (11 MBps) [2024-12-13T23:05:15.773Z] Copying: 148/1024 [MB] (11 MBps) [2024-12-13T23:05:16.764Z] Copying: 160/1024 [MB] (11 MBps) [2024-12-13T23:05:17.730Z] Copying: 171/1024 [MB] (11 MBps) [2024-12-13T23:05:18.675Z] Copying: 183/1024 [MB] (11 MBps) [2024-12-13T23:05:19.618Z] Copying: 194/1024 [MB] (11 MBps) [2024-12-13T23:05:20.560Z] Copying: 206/1024 [MB] (11 MBps) [2024-12-13T23:05:21.952Z] Copying: 217/1024 [MB] (11 MBps) [2024-12-13T23:05:22.896Z] Copying: 228/1024 [MB] (11 MBps) [2024-12-13T23:05:23.839Z] Copying: 239/1024 [MB] (10 MBps) [2024-12-13T23:05:24.781Z] Copying: 251/1024 [MB] (11 MBps) [2024-12-13T23:05:25.719Z] Copying: 262/1024 [MB] (11 MBps) [2024-12-13T23:05:26.664Z] Copying: 273/1024 [MB] (10 MBps) [2024-12-13T23:05:27.607Z] Copying: 283/1024 [MB] (10 MBps) [2024-12-13T23:05:28.548Z] Copying: 295/1024 [MB] (11 MBps) [2024-12-13T23:05:29.582Z] Copying: 306/1024 [MB] (11 MBps) [2024-12-13T23:05:30.553Z] Copying: 322/1024 [MB] (15 MBps) [2024-12-13T23:05:31.944Z] Copying: 332/1024 [MB] (10 MBps) [2024-12-13T23:05:32.890Z] Copying: 344/1024 [MB] (11 MBps) [2024-12-13T23:05:33.836Z] Copying: 356/1024 [MB] (12 MBps) [2024-12-13T23:05:34.795Z] Copying: 367/1024 [MB] (10 MBps) [2024-12-13T23:05:35.736Z] Copying: 377/1024 [MB] (10 MBps) [2024-12-13T23:05:36.680Z] Copying: 388/1024 [MB] (10 MBps) [2024-12-13T23:05:37.623Z] Copying: 398/1024 [MB] (10 MBps) [2024-12-13T23:05:38.567Z] Copying: 410/1024 [MB] (11 MBps) [2024-12-13T23:05:39.951Z] Copying: 422/1024 [MB] (11 MBps) [2024-12-13T23:05:40.894Z] Copying: 434/1024 [MB] (11 MBps) [2024-12-13T23:05:41.837Z] Copying: 446/1024 [MB] (11 MBps) [2024-12-13T23:05:42.779Z] Copying: 457/1024 [MB] (11 MBps) [2024-12-13T23:05:43.724Z] Copying: 468/1024 [MB] (10 MBps) [2024-12-13T23:05:44.675Z] Copying: 479/1024 [MB] (10 MBps) [2024-12-13T23:05:45.618Z] Copying: 491/1024 [MB] (11 MBps) [2024-12-13T23:05:46.563Z] Copying: 503/1024 [MB] (11 MBps) [2024-12-13T23:05:47.947Z] Copying: 514/1024 [MB] (11 MBps) [2024-12-13T23:05:48.895Z] Copying: 525/1024 [MB] (10 MBps) [2024-12-13T23:05:49.842Z] Copying: 545/1024 [MB] (19 MBps) [2024-12-13T23:05:50.797Z] Copying: 555/1024 [MB] (10 MBps) [2024-12-13T23:05:51.745Z] Copying: 565/1024 [MB] (10 MBps) [2024-12-13T23:05:52.691Z] Copying: 577/1024 [MB] (12 MBps) [2024-12-13T23:05:53.636Z] Copying: 589/1024 [MB] (11 MBps) [2024-12-13T23:05:54.580Z] Copying: 601/1024 [MB] (12 MBps) [2024-12-13T23:05:55.965Z] Copying: 613/1024 [MB] (11 MBps) [2024-12-13T23:05:56.536Z] Copying: 624/1024 [MB] (11 MBps) [2024-12-13T23:05:57.924Z] Copying: 634/1024 [MB] (10 MBps) [2024-12-13T23:05:58.869Z] Copying: 646/1024 [MB] (11 MBps) 
[2024-12-13T23:05:59.815Z] Copying: 657/1024 [MB] (11 MBps) [2024-12-13T23:06:00.757Z] Copying: 668/1024 [MB] (10 MBps) [2024-12-13T23:06:01.700Z] Copying: 679/1024 [MB] (11 MBps) [2024-12-13T23:06:02.644Z] Copying: 690/1024 [MB] (10 MBps) [2024-12-13T23:06:03.590Z] Copying: 701/1024 [MB] (11 MBps) [2024-12-13T23:06:04.541Z] Copying: 712/1024 [MB] (10 MBps) [2024-12-13T23:06:06.012Z] Copying: 722/1024 [MB] (10 MBps) [2024-12-13T23:06:06.587Z] Copying: 733/1024 [MB] (11 MBps) [2024-12-13T23:06:07.975Z] Copying: 761/1024 [MB] (27 MBps) [2024-12-13T23:06:08.552Z] Copying: 782/1024 [MB] (20 MBps) [2024-12-13T23:06:09.959Z] Copying: 804/1024 [MB] (21 MBps) [2024-12-13T23:06:10.905Z] Copying: 823/1024 [MB] (19 MBps) [2024-12-13T23:06:11.858Z] Copying: 839/1024 [MB] (15 MBps) [2024-12-13T23:06:12.804Z] Copying: 860/1024 [MB] (21 MBps) [2024-12-13T23:06:13.750Z] Copying: 876/1024 [MB] (15 MBps) [2024-12-13T23:06:14.694Z] Copying: 896/1024 [MB] (20 MBps) [2024-12-13T23:06:15.640Z] Copying: 919/1024 [MB] (23 MBps) [2024-12-13T23:06:16.584Z] Copying: 941/1024 [MB] (21 MBps) [2024-12-13T23:06:17.970Z] Copying: 964/1024 [MB] (23 MBps) [2024-12-13T23:06:18.543Z] Copying: 987/1024 [MB] (22 MBps) [2024-12-13T23:06:19.931Z] Copying: 1003/1024 [MB] (16 MBps) [2024-12-13T23:06:20.503Z] Copying: 1015/1024 [MB] (11 MBps) [2024-12-13T23:06:20.503Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-13 23:06:20.454778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.363 [2024-12-13 23:06:20.454862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:41.363 [2024-12-13 23:06:20.454880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:41.363 [2024-12-13 23:06:20.454894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.363 [2024-12-13 23:06:20.454917] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:41.363 [2024-12-13 23:06:20.458696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.363 [2024-12-13 23:06:20.458740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:41.363 [2024-12-13 23:06:20.458753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.760 ms 00:25:41.363 [2024-12-13 23:06:20.458771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.363 [2024-12-13 23:06:20.458997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.363 [2024-12-13 23:06:20.459009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:41.363 [2024-12-13 23:06:20.459020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:25:41.363 [2024-12-13 23:06:20.459034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.363 [2024-12-13 23:06:20.466944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.363 [2024-12-13 23:06:20.466991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:41.363 [2024-12-13 23:06:20.467003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.893 ms 00:25:41.363 [2024-12-13 23:06:20.467012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.363 [2024-12-13 23:06:20.473340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.363 [2024-12-13 23:06:20.473395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:25:41.363 [2024-12-13 23:06:20.473407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.287 ms 00:25:41.363 [2024-12-13 23:06:20.473422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.363 [2024-12-13 23:06:20.500607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.363 [2024-12-13 23:06:20.500656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:41.363 [2024-12-13 23:06:20.500671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.144 ms 00:25:41.363 [2024-12-13 23:06:20.500679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.624 [2024-12-13 23:06:20.517498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.624 [2024-12-13 23:06:20.517544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:41.624 [2024-12-13 23:06:20.517557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.772 ms 00:25:41.624 [2024-12-13 23:06:20.517566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.887 [2024-12-13 23:06:20.887115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.887 [2024-12-13 23:06:20.887174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:41.887 [2024-12-13 23:06:20.887188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 369.497 ms 00:25:41.887 [2024-12-13 23:06:20.887197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.887 [2024-12-13 23:06:20.912818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.887 [2024-12-13 23:06:20.912863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:41.887 [2024-12-13 23:06:20.912876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.604 ms 00:25:41.887 [2024-12-13 23:06:20.912884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.887 [2024-12-13 23:06:20.938262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.887 [2024-12-13 23:06:20.938304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:41.887 [2024-12-13 23:06:20.938317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.333 ms 00:25:41.887 [2024-12-13 23:06:20.938324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.887 [2024-12-13 23:06:20.962924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.887 [2024-12-13 23:06:20.962967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:41.887 [2024-12-13 23:06:20.962978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.556 ms 00:25:41.887 [2024-12-13 23:06:20.962986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.887 [2024-12-13 23:06:20.987901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.887 [2024-12-13 23:06:20.987958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:41.887 [2024-12-13 23:06:20.987970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.827 ms 00:25:41.887 [2024-12-13 23:06:20.987978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.887 [2024-12-13 23:06:20.988022] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:41.887 [2024-12-13 
23:06:20.988040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:41.887 [2024-12-13 23:06:20.988052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:25:41.887 [2024-12-13 23:06:20.988245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:41.887 [2024-12-13 23:06:20.988382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988856] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:41.888 [2024-12-13 23:06:20.988872] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:41.888 [2024-12-13 23:06:20.988881] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae297e55-fec4-44cb-be54-881087e1b917 00:25:41.888 [2024-12-13 23:06:20.988891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:41.888 [2024-12-13 23:06:20.988899] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 51136 00:25:41.888 [2024-12-13 23:06:20.988906] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 50176 00:25:41.888 [2024-12-13 23:06:20.988918] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0191 00:25:41.888 [2024-12-13 23:06:20.988931] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:41.888 [2024-12-13 23:06:20.988949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:41.888 [2024-12-13 23:06:20.988957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:41.888 [2024-12-13 23:06:20.988963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:41.888 [2024-12-13 23:06:20.988973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:41.888 [2024-12-13 23:06:20.988982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.888 [2024-12-13 23:06:20.988989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:41.888 [2024-12-13 23:06:20.988998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:25:41.888 [2024-12-13 23:06:20.989005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.888 [2024-12-13 23:06:21.003584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.888 [2024-12-13 23:06:21.003625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:41.888 [2024-12-13 23:06:21.003646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.560 ms 00:25:41.888 [2024-12-13 23:06:21.003655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.888 [2024-12-13 23:06:21.004132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.888 [2024-12-13 23:06:21.004155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:41.888 [2024-12-13 23:06:21.004166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:25:41.888 [2024-12-13 23:06:21.004174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.043245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.043294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.150 [2024-12-13 23:06:21.043308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.150 [2024-12-13 23:06:21.043319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.043399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.043410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.150 [2024-12-13 23:06:21.043420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.150 [2024-12-13 23:06:21.043431] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.043503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.043516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.150 [2024-12-13 23:06:21.043532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.150 [2024-12-13 23:06:21.043557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.043575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.043585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.150 [2024-12-13 23:06:21.043593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.150 [2024-12-13 23:06:21.043602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.134718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.135069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.150 [2024-12-13 23:06:21.135094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.150 [2024-12-13 23:06:21.135104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.208509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.208574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.150 [2024-12-13 23:06:21.208589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.150 [2024-12-13 23:06:21.208599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.208715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.208727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.150 [2024-12-13 23:06:21.208737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.150 [2024-12-13 23:06:21.208752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.208829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.208841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.150 [2024-12-13 23:06:21.208851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.150 [2024-12-13 23:06:21.208861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.208973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.208986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.150 [2024-12-13 23:06:21.208996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.150 [2024-12-13 23:06:21.209005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.150 [2024-12-13 23:06:21.209047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.150 [2024-12-13 23:06:21.209060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:42.150 [2024-12-13 23:06:21.209069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:25:42.151 [2024-12-13 23:06:21.209080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.151 [2024-12-13 23:06:21.209129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.151 [2024-12-13 23:06:21.209141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.151 [2024-12-13 23:06:21.209151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.151 [2024-12-13 23:06:21.209160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.151 [2024-12-13 23:06:21.209219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.151 [2024-12-13 23:06:21.209232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.151 [2024-12-13 23:06:21.209242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.151 [2024-12-13 23:06:21.209251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.151 [2024-12-13 23:06:21.209412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 754.608 ms, result 0 00:25:43.094 00:25:43.094 00:25:43.094 23:06:22 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:45.646 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:45.646 23:06:24 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:45.646 23:06:24 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:45.646 23:06:24 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:45.646 23:06:24 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:45.646 23:06:24 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:45.646 Process with pid 79036 is not found 00:25:45.646 23:06:24 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79036 00:25:45.646 23:06:24 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79036 ']' 00:25:45.647 23:06:24 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79036 00:25:45.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79036) - No such process 00:25:45.647 23:06:24 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79036 is not found' 00:25:45.647 23:06:24 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:45.647 Remove shared memory files 00:25:45.647 23:06:24 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:45.647 23:06:24 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:45.647 23:06:24 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:45.647 23:06:24 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:45.647 23:06:24 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:45.647 23:06:24 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:45.647 ************************************ 00:25:45.647 END TEST ftl_restore 00:25:45.647 ************************************ 00:25:45.647 00:25:45.647 real 5m3.748s 00:25:45.647 user 4m51.397s 00:25:45.647 sys 0m12.216s 00:25:45.647 23:06:24 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.647 23:06:24 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:45.647 23:06:24 ftl -- 
ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:45.647 23:06:24 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:45.647 23:06:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.647 23:06:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:45.647 ************************************ 00:25:45.647 START TEST ftl_dirty_shutdown 00:25:45.647 ************************************ 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:45.647 * Looking for test storage... 00:25:45.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.647 --rc genhtml_branch_coverage=1 00:25:45.647 --rc genhtml_function_coverage=1 00:25:45.647 --rc genhtml_legend=1 00:25:45.647 --rc geninfo_all_blocks=1 00:25:45.647 --rc geninfo_unexecuted_blocks=1 00:25:45.647 00:25:45.647 ' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.647 --rc genhtml_branch_coverage=1 00:25:45.647 --rc genhtml_function_coverage=1 00:25:45.647 --rc genhtml_legend=1 00:25:45.647 --rc geninfo_all_blocks=1 00:25:45.647 --rc geninfo_unexecuted_blocks=1 00:25:45.647 00:25:45.647 ' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.647 --rc genhtml_branch_coverage=1 00:25:45.647 --rc genhtml_function_coverage=1 00:25:45.647 --rc genhtml_legend=1 00:25:45.647 --rc geninfo_all_blocks=1 00:25:45.647 --rc geninfo_unexecuted_blocks=1 00:25:45.647 00:25:45.647 ' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.647 --rc genhtml_branch_coverage=1 00:25:45.647 --rc genhtml_function_coverage=1 00:25:45.647 --rc genhtml_legend=1 00:25:45.647 --rc geninfo_all_blocks=1 00:25:45.647 --rc geninfo_unexecuted_blocks=1 00:25:45.647 00:25:45.647 ' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:45.647 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:45.648 23:06:24 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82221 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82221 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82221 ']' 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.648 23:06:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:45.910 [2024-12-13 23:06:24.787265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:25:45.910 [2024-12-13 23:06:24.787680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82221 ] 00:25:45.910 [2024-12-13 23:06:24.947186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.171 [2024-12-13 23:06:25.089327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.114 23:06:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.114 23:06:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:47.114 23:06:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:47.114 23:06:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:47.114 23:06:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:47.114 23:06:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:47.114 23:06:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:47.114 23:06:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:47.114 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:47.114 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:47.114 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:47.114 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:47.114 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:47.114 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:47.114 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:47.114 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:47.377 { 00:25:47.377 "name": "nvme0n1", 00:25:47.377 "aliases": [ 00:25:47.377 "0b3e88ac-8aca-4ab1-a5b1-58336c89ab2b" 00:25:47.377 ], 00:25:47.377 "product_name": "NVMe disk", 00:25:47.377 "block_size": 4096, 00:25:47.377 "num_blocks": 1310720, 00:25:47.377 "uuid": "0b3e88ac-8aca-4ab1-a5b1-58336c89ab2b", 00:25:47.377 "numa_id": -1, 00:25:47.377 "assigned_rate_limits": { 00:25:47.377 "rw_ios_per_sec": 0, 00:25:47.377 "rw_mbytes_per_sec": 0, 00:25:47.377 "r_mbytes_per_sec": 0, 00:25:47.377 "w_mbytes_per_sec": 0 00:25:47.377 }, 00:25:47.377 "claimed": true, 00:25:47.377 "claim_type": "read_many_write_one", 00:25:47.377 "zoned": false, 00:25:47.377 "supported_io_types": { 00:25:47.377 "read": true, 00:25:47.377 "write": true, 00:25:47.377 "unmap": true, 00:25:47.377 "flush": true, 00:25:47.377 "reset": true, 00:25:47.377 "nvme_admin": true, 00:25:47.377 "nvme_io": true, 00:25:47.377 "nvme_io_md": false, 00:25:47.377 "write_zeroes": true, 00:25:47.377 "zcopy": false, 00:25:47.377 "get_zone_info": false, 00:25:47.377 "zone_management": false, 00:25:47.377 "zone_append": false, 00:25:47.377 "compare": true, 00:25:47.377 "compare_and_write": false, 00:25:47.377 "abort": true, 00:25:47.377 "seek_hole": false, 00:25:47.377 "seek_data": false, 00:25:47.377 
"copy": true, 00:25:47.377 "nvme_iov_md": false 00:25:47.377 }, 00:25:47.377 "driver_specific": { 00:25:47.377 "nvme": [ 00:25:47.377 { 00:25:47.377 "pci_address": "0000:00:11.0", 00:25:47.377 "trid": { 00:25:47.377 "trtype": "PCIe", 00:25:47.377 "traddr": "0000:00:11.0" 00:25:47.377 }, 00:25:47.377 "ctrlr_data": { 00:25:47.377 "cntlid": 0, 00:25:47.377 "vendor_id": "0x1b36", 00:25:47.377 "model_number": "QEMU NVMe Ctrl", 00:25:47.377 "serial_number": "12341", 00:25:47.377 "firmware_revision": "8.0.0", 00:25:47.377 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:47.377 "oacs": { 00:25:47.377 "security": 0, 00:25:47.377 "format": 1, 00:25:47.377 "firmware": 0, 00:25:47.377 "ns_manage": 1 00:25:47.377 }, 00:25:47.377 "multi_ctrlr": false, 00:25:47.377 "ana_reporting": false 00:25:47.377 }, 00:25:47.377 "vs": { 00:25:47.377 "nvme_version": "1.4" 00:25:47.377 }, 00:25:47.377 "ns_data": { 00:25:47.377 "id": 1, 00:25:47.377 "can_share": false 00:25:47.377 } 00:25:47.377 } 00:25:47.377 ], 00:25:47.377 "mp_policy": "active_passive" 00:25:47.377 } 00:25:47.377 } 00:25:47.377 ]' 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:47.377 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:47.638 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=4939122a-ea42-44d5-a44f-7465ef975327 00:25:47.638 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:47.638 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4939122a-ea42-44d5-a44f-7465ef975327 00:25:47.900 23:06:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:48.212 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a02ee0f1-ea3b-4d15-b126-74af5d1ab49b 00:25:48.212 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a02ee0f1-ea3b-4d15-b126-74af5d1ab49b 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:48.501 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:48.501 { 00:25:48.501 "name": "58a9fa72-3607-45e4-afbd-c7eda2df1b64", 00:25:48.501 "aliases": [ 00:25:48.501 "lvs/nvme0n1p0" 00:25:48.501 ], 00:25:48.501 "product_name": "Logical Volume", 00:25:48.501 "block_size": 4096, 00:25:48.501 "num_blocks": 26476544, 00:25:48.501 "uuid": "58a9fa72-3607-45e4-afbd-c7eda2df1b64", 00:25:48.501 "assigned_rate_limits": { 00:25:48.501 "rw_ios_per_sec": 0, 00:25:48.501 "rw_mbytes_per_sec": 0, 00:25:48.501 "r_mbytes_per_sec": 0, 00:25:48.501 "w_mbytes_per_sec": 0 00:25:48.501 }, 00:25:48.501 "claimed": false, 00:25:48.501 "zoned": false, 00:25:48.501 "supported_io_types": { 00:25:48.501 "read": true, 00:25:48.501 "write": true, 00:25:48.501 "unmap": true, 00:25:48.501 "flush": false, 00:25:48.501 "reset": true, 00:25:48.501 "nvme_admin": false, 00:25:48.501 "nvme_io": false, 00:25:48.501 "nvme_io_md": false, 00:25:48.501 "write_zeroes": true, 00:25:48.501 "zcopy": false, 00:25:48.501 "get_zone_info": false, 00:25:48.501 "zone_management": false, 00:25:48.501 "zone_append": false, 00:25:48.501 "compare": false, 00:25:48.501 "compare_and_write": false, 00:25:48.501 "abort": false, 00:25:48.502 "seek_hole": true, 00:25:48.502 "seek_data": true, 00:25:48.502 "copy": false, 00:25:48.502 "nvme_iov_md": false 00:25:48.502 }, 00:25:48.502 "driver_specific": { 00:25:48.502 "lvol": { 00:25:48.502 "lvol_store_uuid": "a02ee0f1-ea3b-4d15-b126-74af5d1ab49b", 00:25:48.502 "base_bdev": "nvme0n1", 00:25:48.502 "thin_provision": true, 00:25:48.502 "num_allocated_clusters": 0, 00:25:48.502 "snapshot": false, 00:25:48.502 "clone": false, 00:25:48.502 "esnap_clone": false 00:25:48.502 } 00:25:48.502 } 00:25:48.502 } 00:25:48.502 ]' 00:25:48.502 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:48.502 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:48.502 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:48.502 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:48.502 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:48.502 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:48.763 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:49.023 23:06:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:49.023 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:49.023 { 00:25:49.023 "name": "58a9fa72-3607-45e4-afbd-c7eda2df1b64", 00:25:49.023 "aliases": [ 00:25:49.024 "lvs/nvme0n1p0" 00:25:49.024 ], 00:25:49.024 "product_name": "Logical Volume", 00:25:49.024 "block_size": 4096, 00:25:49.024 "num_blocks": 26476544, 00:25:49.024 "uuid": "58a9fa72-3607-45e4-afbd-c7eda2df1b64", 00:25:49.024 "assigned_rate_limits": { 00:25:49.024 "rw_ios_per_sec": 0, 00:25:49.024 "rw_mbytes_per_sec": 0, 00:25:49.024 "r_mbytes_per_sec": 0, 00:25:49.024 "w_mbytes_per_sec": 0 00:25:49.024 }, 00:25:49.024 "claimed": false, 00:25:49.024 "zoned": false, 00:25:49.024 "supported_io_types": { 00:25:49.024 "read": true, 00:25:49.024 "write": true, 00:25:49.024 "unmap": true, 00:25:49.024 "flush": false, 00:25:49.024 "reset": true, 00:25:49.024 "nvme_admin": false, 00:25:49.024 "nvme_io": false, 00:25:49.024 "nvme_io_md": false, 00:25:49.024 "write_zeroes": true, 00:25:49.024 "zcopy": false, 00:25:49.024 "get_zone_info": false, 00:25:49.024 "zone_management": false, 00:25:49.024 "zone_append": false, 00:25:49.024 "compare": false, 00:25:49.024 "compare_and_write": false, 00:25:49.024 "abort": false, 00:25:49.024 "seek_hole": true, 00:25:49.024 "seek_data": true, 00:25:49.024 "copy": false, 00:25:49.024 "nvme_iov_md": false 00:25:49.024 }, 00:25:49.024 "driver_specific": { 00:25:49.024 "lvol": { 00:25:49.024 "lvol_store_uuid": "a02ee0f1-ea3b-4d15-b126-74af5d1ab49b", 00:25:49.024 "base_bdev": "nvme0n1", 00:25:49.024 "thin_provision": true, 00:25:49.024 "num_allocated_clusters": 0, 00:25:49.024 "snapshot": false, 00:25:49.024 "clone": false, 00:25:49.024 "esnap_clone": false 00:25:49.024 } 00:25:49.024 } 00:25:49.024 } 00:25:49.024 ]' 00:25:49.024 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:49.024 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:49.024 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:49.285 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58a9fa72-3607-45e4-afbd-c7eda2df1b64 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:49.546 { 00:25:49.546 "name": "58a9fa72-3607-45e4-afbd-c7eda2df1b64", 00:25:49.546 "aliases": [ 00:25:49.546 "lvs/nvme0n1p0" 00:25:49.546 ], 00:25:49.546 "product_name": "Logical Volume", 00:25:49.546 "block_size": 4096, 00:25:49.546 "num_blocks": 26476544, 00:25:49.546 "uuid": "58a9fa72-3607-45e4-afbd-c7eda2df1b64", 00:25:49.546 "assigned_rate_limits": { 00:25:49.546 "rw_ios_per_sec": 0, 00:25:49.546 "rw_mbytes_per_sec": 0, 00:25:49.546 "r_mbytes_per_sec": 0, 00:25:49.546 "w_mbytes_per_sec": 0 00:25:49.546 }, 00:25:49.546 "claimed": false, 00:25:49.546 "zoned": false, 00:25:49.546 "supported_io_types": { 00:25:49.546 "read": true, 00:25:49.546 "write": true, 00:25:49.546 "unmap": true, 00:25:49.546 "flush": false, 00:25:49.546 "reset": true, 00:25:49.546 "nvme_admin": false, 00:25:49.546 "nvme_io": false, 00:25:49.546 "nvme_io_md": false, 00:25:49.546 "write_zeroes": true, 00:25:49.546 "zcopy": false, 00:25:49.546 "get_zone_info": false, 00:25:49.546 "zone_management": false, 00:25:49.546 "zone_append": false, 00:25:49.546 "compare": false, 00:25:49.546 "compare_and_write": false, 00:25:49.546 "abort": false, 00:25:49.546 "seek_hole": true, 00:25:49.546 "seek_data": true, 00:25:49.546 "copy": false, 00:25:49.546 "nvme_iov_md": false 00:25:49.546 }, 00:25:49.546 "driver_specific": { 00:25:49.546 "lvol": { 00:25:49.546 "lvol_store_uuid": "a02ee0f1-ea3b-4d15-b126-74af5d1ab49b", 00:25:49.546 "base_bdev": "nvme0n1", 00:25:49.546 "thin_provision": true, 00:25:49.546 "num_allocated_clusters": 0, 00:25:49.546 "snapshot": false, 00:25:49.546 "clone": false, 00:25:49.546 "esnap_clone": false 00:25:49.546 } 00:25:49.546 } 00:25:49.546 } 00:25:49.546 ]' 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 58a9fa72-3607-45e4-afbd-c7eda2df1b64 
--l2p_dram_limit 10' 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:49.546 23:06:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 58a9fa72-3607-45e4-afbd-c7eda2df1b64 --l2p_dram_limit 10 -c nvc0n1p0 00:25:49.808 [2024-12-13 23:06:28.830660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.830704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:49.808 [2024-12-13 23:06:28.830718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:49.808 [2024-12-13 23:06:28.830725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.830772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.830780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.808 [2024-12-13 23:06:28.830788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:49.808 [2024-12-13 23:06:28.830794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.830813] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:49.808 [2024-12-13 23:06:28.831422] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:49.808 [2024-12-13 23:06:28.831444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.831450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.808 [2024-12-13 23:06:28.831459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:25:49.808 [2024-12-13 23:06:28.831465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.831619] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 80fd5bf1-b48b-459d-9945-757b4839e422 00:25:49.808 [2024-12-13 23:06:28.832893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.832922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:49.808 [2024-12-13 23:06:28.832931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:49.808 [2024-12-13 23:06:28.832940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.839796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.839824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.808 [2024-12-13 23:06:28.839832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.817 ms 00:25:49.808 [2024-12-13 23:06:28.839840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.839908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.839917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.808 [2024-12-13 23:06:28.839923] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:49.808 [2024-12-13 23:06:28.839934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.839963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.839971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:49.808 [2024-12-13 23:06:28.839979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:49.808 [2024-12-13 23:06:28.839988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.840004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:49.808 [2024-12-13 23:06:28.843202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.843224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:49.808 [2024-12-13 23:06:28.843234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms 00:25:49.808 [2024-12-13 23:06:28.843241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.843271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.843277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:49.808 [2024-12-13 23:06:28.843285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:49.808 [2024-12-13 23:06:28.843291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.843325] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:49.808 [2024-12-13 23:06:28.843436] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:49.808 [2024-12-13 23:06:28.843451] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:49.808 [2024-12-13 23:06:28.843459] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:49.808 [2024-12-13 23:06:28.843469] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:49.808 [2024-12-13 23:06:28.843476] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:49.808 [2024-12-13 23:06:28.843484] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:49.808 [2024-12-13 23:06:28.843489] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:49.808 [2024-12-13 23:06:28.843500] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:49.808 [2024-12-13 23:06:28.843505] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:49.808 [2024-12-13 23:06:28.843514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.843524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:49.808 [2024-12-13 23:06:28.843532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:25:49.808 [2024-12-13 23:06:28.843547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.843615] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.808 [2024-12-13 23:06:28.843621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:49.808 [2024-12-13 23:06:28.843629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:49.808 [2024-12-13 23:06:28.843635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.808 [2024-12-13 23:06:28.843712] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:49.808 [2024-12-13 23:06:28.843720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:49.808 [2024-12-13 23:06:28.843728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.808 [2024-12-13 23:06:28.843734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.808 [2024-12-13 23:06:28.843742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:49.808 [2024-12-13 23:06:28.843748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:49.808 [2024-12-13 23:06:28.843764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:49.808 [2024-12-13 23:06:28.843771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:49.808 [2024-12-13 23:06:28.843778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:49.808 [2024-12-13 23:06:28.843783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.808 [2024-12-13 23:06:28.843791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:49.808 [2024-12-13 23:06:28.843799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:49.808 [2024-12-13 23:06:28.843806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.808 [2024-12-13 23:06:28.843812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:49.808 [2024-12-13 23:06:28.843819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:49.808 [2024-12-13 23:06:28.843824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.808 [2024-12-13 23:06:28.843832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:49.808 [2024-12-13 23:06:28.843838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:49.809 [2024-12-13 23:06:28.843845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.809 [2024-12-13 23:06:28.843851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:49.809 [2024-12-13 23:06:28.843857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:49.809 [2024-12-13 23:06:28.843862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.809 [2024-12-13 23:06:28.843869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:49.809 [2024-12-13 23:06:28.843874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:49.809 [2024-12-13 23:06:28.843880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.809 [2024-12-13 23:06:28.843885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:49.809 [2024-12-13 23:06:28.843891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:49.809 [2024-12-13 23:06:28.843896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.809 [2024-12-13 23:06:28.843903] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:49.809 [2024-12-13 23:06:28.843909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:49.809 [2024-12-13 23:06:28.843915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.809 [2024-12-13 23:06:28.843920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:49.809 [2024-12-13 23:06:28.843928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:49.809 [2024-12-13 23:06:28.843934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.809 [2024-12-13 23:06:28.843940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:49.809 [2024-12-13 23:06:28.843945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:49.809 [2024-12-13 23:06:28.843953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.809 [2024-12-13 23:06:28.843958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:49.809 [2024-12-13 23:06:28.843964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:49.809 [2024-12-13 23:06:28.843969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.809 [2024-12-13 23:06:28.843975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:49.809 [2024-12-13 23:06:28.843980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:49.809 [2024-12-13 23:06:28.843986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.809 [2024-12-13 23:06:28.843993] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:49.809 [2024-12-13 23:06:28.844001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:49.809 [2024-12-13 23:06:28.844007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.809 [2024-12-13 23:06:28.844014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.809 [2024-12-13 23:06:28.844020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:49.809 [2024-12-13 23:06:28.844028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:49.809 [2024-12-13 23:06:28.844034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:49.809 [2024-12-13 23:06:28.844041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:49.809 [2024-12-13 23:06:28.844046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:49.809 [2024-12-13 23:06:28.844052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:49.809 [2024-12-13 23:06:28.844059] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:49.809 [2024-12-13 23:06:28.844069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.809 [2024-12-13 23:06:28.844078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:49.809 [2024-12-13 23:06:28.844085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:49.809 [2024-12-13 23:06:28.844091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:49.809 [2024-12-13 23:06:28.844097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:49.809 [2024-12-13 23:06:28.844103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:49.809 [2024-12-13 23:06:28.844110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:49.809 [2024-12-13 23:06:28.844116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:49.809 [2024-12-13 23:06:28.844123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:49.809 [2024-12-13 23:06:28.844129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:49.809 [2024-12-13 23:06:28.844138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:49.809 [2024-12-13 23:06:28.844144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:49.809 [2024-12-13 23:06:28.844153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:49.809 [2024-12-13 23:06:28.844162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:49.809 [2024-12-13 23:06:28.844169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:49.809 [2024-12-13 23:06:28.844175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:49.809 [2024-12-13 23:06:28.844182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.809 [2024-12-13 23:06:28.844188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:49.809 [2024-12-13 23:06:28.844195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:49.809 [2024-12-13 23:06:28.844201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:49.809 [2024-12-13 23:06:28.844208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:49.809 [2024-12-13 23:06:28.844215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.809 [2024-12-13 23:06:28.844223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:49.809 [2024-12-13 23:06:28.844229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:25:49.809 [2024-12-13 23:06:28.844237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.809 [2024-12-13 23:06:28.844275] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:49.809 [2024-12-13 23:06:28.844287] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:54.023 [2024-12-13 23:06:32.797205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.023 [2024-12-13 23:06:32.797304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:54.023 [2024-12-13 23:06:32.797326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3952.915 ms 00:25:54.023 [2024-12-13 23:06:32.797339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.023 [2024-12-13 23:06:32.834933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.023 [2024-12-13 23:06:32.835008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:54.023 [2024-12-13 23:06:32.835025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.324 ms 00:25:54.023 [2024-12-13 23:06:32.835037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.023 [2024-12-13 23:06:32.835197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.023 [2024-12-13 23:06:32.835214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:54.024 [2024-12-13 23:06:32.835223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:54.024 [2024-12-13 23:06:32.835243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:32.875517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:32.875597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:54.024 [2024-12-13 23:06:32.875611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.233 ms 00:25:54.024 [2024-12-13 23:06:32.875622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:32.875666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:32.875682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:54.024 [2024-12-13 23:06:32.875692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:54.024 [2024-12-13 23:06:32.875712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:32.876470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:32.876526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:54.024 [2024-12-13 23:06:32.876539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:25:54.024 [2024-12-13 23:06:32.876551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:32.876676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:32.876692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:54.024 [2024-12-13 23:06:32.876707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:54.024 [2024-12-13 23:06:32.876720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:32.897269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:32.897325] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:54.024 [2024-12-13 23:06:32.897338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.528 ms 00:25:54.024 [2024-12-13 23:06:32.897350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:32.924521] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:54.024 [2024-12-13 23:06:32.929577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:32.929627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:54.024 [2024-12-13 23:06:32.929644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.130 ms 00:25:54.024 [2024-12-13 23:06:32.929653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:33.029378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:33.029434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:54.024 [2024-12-13 23:06:33.029451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.672 ms 00:25:54.024 [2024-12-13 23:06:33.029462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:33.029686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:33.029705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:54.024 [2024-12-13 23:06:33.029722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:25:54.024 [2024-12-13 23:06:33.029731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:33.056423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:33.056486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:54.024 [2024-12-13 23:06:33.056505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.615 ms 00:25:54.024 [2024-12-13 23:06:33.056513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:33.081912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:33.081959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:54.024 [2024-12-13 23:06:33.081977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.336 ms 00:25:54.024 [2024-12-13 23:06:33.081986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.024 [2024-12-13 23:06:33.082597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.024 [2024-12-13 23:06:33.082622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:54.024 [2024-12-13 23:06:33.082635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:25:54.024 [2024-12-13 23:06:33.082647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.285 [2024-12-13 23:06:33.172959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.285 [2024-12-13 23:06:33.173013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:54.285 [2024-12-13 23:06:33.173035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.250 ms 00:25:54.285 [2024-12-13 23:06:33.173044] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.285 [2024-12-13 23:06:33.202085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.285 [2024-12-13 23:06:33.202136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:54.285 [2024-12-13 23:06:33.202153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.943 ms 00:25:54.285 [2024-12-13 23:06:33.202162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.285 [2024-12-13 23:06:33.228145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.285 [2024-12-13 23:06:33.228196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:54.285 [2024-12-13 23:06:33.228213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.926 ms 00:25:54.285 [2024-12-13 23:06:33.228222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.285 [2024-12-13 23:06:33.254137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.285 [2024-12-13 23:06:33.254188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:54.285 [2024-12-13 23:06:33.254204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.860 ms 00:25:54.285 [2024-12-13 23:06:33.254211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.285 [2024-12-13 23:06:33.254271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.285 [2024-12-13 23:06:33.254281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:54.285 [2024-12-13 23:06:33.254299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:54.285 [2024-12-13 23:06:33.254307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.285 [2024-12-13 23:06:33.254417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.285 [2024-12-13 23:06:33.254436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:54.285 [2024-12-13 23:06:33.254448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:54.285 [2024-12-13 23:06:33.254457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.285 [2024-12-13 23:06:33.256515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4425.255 ms, result 0 00:25:54.285 { 00:25:54.285 "name": "ftl0", 00:25:54.285 "uuid": "80fd5bf1-b48b-459d-9945-757b4839e422" 00:25:54.285 } 00:25:54.285 23:06:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:54.285 23:06:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:54.547 23:06:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:54.547 23:06:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:54.547 23:06:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:54.809 /dev/nbd0 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:54.809 1+0 records in 00:25:54.809 1+0 records out 00:25:54.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549347 s, 7.5 MB/s 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:54.809 23:06:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:54.809 [2024-12-13 23:06:33.807263] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:54.809 [2024-12-13 23:06:33.807410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82376 ] 00:25:55.071 [2024-12-13 23:06:33.970698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.071 [2024-12-13 23:06:34.090598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.458  [2024-12-13T23:06:36.541Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-13T23:06:37.477Z] Copying: 391/1024 [MB] (196 MBps) [2024-12-13T23:06:38.412Z] Copying: 644/1024 [MB] (253 MBps) [2024-12-13T23:06:38.978Z] Copying: 899/1024 [MB] (255 MBps) [2024-12-13T23:06:39.547Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:26:00.407 00:26:00.407 23:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:02.320 23:06:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:02.320 [2024-12-13 23:06:41.431380] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:26:02.320 [2024-12-13 23:06:41.431472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82453 ] 00:26:02.580 [2024-12-13 23:06:41.578329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.580 [2024-12-13 23:06:41.654086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.955  [2024-12-13T23:06:44.140Z] Copying: 33/1024 [MB] (33 MBps) [2024-12-13T23:06:45.075Z] Copying: 63/1024 [MB] (30 MBps) [2024-12-13T23:06:46.014Z] Copying: 92/1024 [MB] (29 MBps) [2024-12-13T23:06:46.949Z] Copying: 122/1024 [MB] (29 MBps) [2024-12-13T23:06:47.882Z] Copying: 155/1024 [MB] (32 MBps) [2024-12-13T23:06:49.257Z] Copying: 189/1024 [MB] (34 MBps) [2024-12-13T23:06:50.192Z] Copying: 221/1024 [MB] (31 MBps) [2024-12-13T23:06:51.125Z] Copying: 252/1024 [MB] (30 MBps) [2024-12-13T23:06:52.061Z] Copying: 281/1024 [MB] (28 MBps) [2024-12-13T23:06:52.997Z] Copying: 312/1024 [MB] (31 MBps) [2024-12-13T23:06:53.932Z] Copying: 343/1024 [MB] (31 MBps) [2024-12-13T23:06:54.867Z] Copying: 375/1024 [MB] (31 MBps) [2024-12-13T23:06:56.240Z] Copying: 410/1024 [MB] (34 MBps) [2024-12-13T23:06:56.901Z] Copying: 441/1024 [MB] (31 MBps) [2024-12-13T23:06:57.837Z] Copying: 475/1024 [MB] (33 MBps) [2024-12-13T23:06:59.212Z] Copying: 509/1024 [MB] (34 MBps) [2024-12-13T23:07:00.146Z] Copying: 543/1024 [MB] (34 MBps) [2024-12-13T23:07:01.080Z] Copying: 578/1024 [MB] (34 MBps) [2024-12-13T23:07:02.011Z] Copying: 612/1024 [MB] (34 MBps) [2024-12-13T23:07:02.946Z] Copying: 647/1024 [MB] (34 MBps) [2024-12-13T23:07:03.880Z] Copying: 682/1024 [MB] (35 MBps) [2024-12-13T23:07:05.257Z] Copying: 716/1024 [MB] (34 MBps) [2024-12-13T23:07:06.190Z] Copying: 751/1024 [MB] (34 MBps) [2024-12-13T23:07:07.125Z] Copying: 785/1024 [MB] (34 MBps) [2024-12-13T23:07:08.059Z] Copying: 820/1024 [MB] (34 MBps) [2024-12-13T23:07:08.992Z] Copying: 855/1024 [MB] (35 MBps) [2024-12-13T23:07:09.925Z] Copying: 890/1024 [MB] (35 MBps) [2024-12-13T23:07:10.860Z] Copying: 921/1024 [MB] (31 MBps) [2024-12-13T23:07:12.233Z] Copying: 956/1024 [MB] (34 MBps) [2024-12-13T23:07:13.167Z] Copying: 988/1024 [MB] (32 MBps) [2024-12-13T23:07:13.167Z] Copying: 1018/1024 [MB] (29 MBps) [2024-12-13T23:07:13.735Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:26:34.595 00:26:34.595 23:07:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:34.595 23:07:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:34.595 23:07:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:34.856 [2024-12-13 23:07:13.873566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.873614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:34.856 [2024-12-13 23:07:13.873627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:34.856 [2024-12-13 23:07:13.873636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 23:07:13.873657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:34.856 [2024-12-13 23:07:13.875910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.875935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:34.856 [2024-12-13 23:07:13.875945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:26:34.856 [2024-12-13 23:07:13.875952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 23:07:13.878795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.878821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:34.856 [2024-12-13 23:07:13.878833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:26:34.856 [2024-12-13 23:07:13.878840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 23:07:13.893974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.894005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:34.856 [2024-12-13 23:07:13.894015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.118 ms 00:26:34.856 [2024-12-13 23:07:13.894021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 23:07:13.898639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.898664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:34.856 [2024-12-13 23:07:13.898674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.589 ms 00:26:34.856 [2024-12-13 23:07:13.898682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 23:07:13.918183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.918210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:34.856 [2024-12-13 23:07:13.918221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.453 ms 00:26:34.856 [2024-12-13 23:07:13.918227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 23:07:13.931654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.931682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:34.856 [2024-12-13 23:07:13.931695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.394 ms 00:26:34.856 [2024-12-13 23:07:13.931702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 23:07:13.931830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.931840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:34.856 [2024-12-13 23:07:13.931848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:26:34.856 [2024-12-13 23:07:13.931855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 23:07:13.950412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.950436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:34.856 [2024-12-13 23:07:13.950445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.542 ms 00:26:34.856 [2024-12-13 23:07:13.950452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 
23:07:13.968417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.968442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:34.856 [2024-12-13 23:07:13.968451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.935 ms 00:26:34.856 [2024-12-13 23:07:13.968457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.856 [2024-12-13 23:07:13.985511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.856 [2024-12-13 23:07:13.985536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:34.856 [2024-12-13 23:07:13.985546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.022 ms 00:26:34.856 [2024-12-13 23:07:13.985551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.119 [2024-12-13 23:07:14.003317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.119 [2024-12-13 23:07:14.003341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:35.119 [2024-12-13 23:07:14.003351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.708 ms 00:26:35.119 [2024-12-13 23:07:14.003356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.119 [2024-12-13 23:07:14.003385] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:35.119 [2024-12-13 23:07:14.003397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:35.119 [2024-12-13 23:07:14.003717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003870] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.003998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 
23:07:14.004036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:35.120 [2024-12-13 23:07:14.004115] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:35.120 [2024-12-13 23:07:14.004122] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80fd5bf1-b48b-459d-9945-757b4839e422 00:26:35.120 [2024-12-13 23:07:14.004128] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:35.120 [2024-12-13 23:07:14.004136] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:35.120 [2024-12-13 23:07:14.004143] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:35.120 [2024-12-13 23:07:14.004152] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:35.120 [2024-12-13 23:07:14.004158] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:35.120 [2024-12-13 23:07:14.004165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:35.120 [2024-12-13 23:07:14.004171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:35.120 [2024-12-13 23:07:14.004190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:35.120 [2024-12-13 23:07:14.004198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:35.120 [2024-12-13 23:07:14.004205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.120 [2024-12-13 23:07:14.004211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:35.120 [2024-12-13 23:07:14.004220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:26:35.120 [2024-12-13 23:07:14.004226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.120 [2024-12-13 23:07:14.014511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.120 [2024-12-13 23:07:14.014536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:35.120 [2024-12-13 23:07:14.014545] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.260 ms 00:26:35.120 [2024-12-13 23:07:14.014552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.120 [2024-12-13 23:07:14.014863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.120 [2024-12-13 23:07:14.014872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:35.120 [2024-12-13 23:07:14.014881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:26:35.120 [2024-12-13 23:07:14.014888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.120 [2024-12-13 23:07:14.049771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.120 [2024-12-13 23:07:14.049799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:35.120 [2024-12-13 23:07:14.049808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.120 [2024-12-13 23:07:14.049815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.120 [2024-12-13 23:07:14.049865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.120 [2024-12-13 23:07:14.049872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:35.120 [2024-12-13 23:07:14.049880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.120 [2024-12-13 23:07:14.049886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.120 [2024-12-13 23:07:14.049940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.120 [2024-12-13 23:07:14.049950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:35.120 [2024-12-13 23:07:14.049959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.120 [2024-12-13 23:07:14.049965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.120 [2024-12-13 23:07:14.049981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.120 [2024-12-13 23:07:14.049989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:35.120 [2024-12-13 23:07:14.049996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.120 [2024-12-13 23:07:14.050002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.120 [2024-12-13 23:07:14.113525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.120 [2024-12-13 23:07:14.113563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:35.120 [2024-12-13 23:07:14.113575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.120 [2024-12-13 23:07:14.113581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.120 [2024-12-13 23:07:14.165181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.120 [2024-12-13 23:07:14.165219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:35.120 [2024-12-13 23:07:14.165230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.121 [2024-12-13 23:07:14.165237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.121 [2024-12-13 23:07:14.165348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.121 [2024-12-13 23:07:14.165357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:26:35.121 [2024-12-13 23:07:14.165369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.121 [2024-12-13 23:07:14.165376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.121 [2024-12-13 23:07:14.165417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.121 [2024-12-13 23:07:14.165427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:35.121 [2024-12-13 23:07:14.165435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.121 [2024-12-13 23:07:14.165441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.121 [2024-12-13 23:07:14.165516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.121 [2024-12-13 23:07:14.165524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:35.121 [2024-12-13 23:07:14.165533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.121 [2024-12-13 23:07:14.165542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.121 [2024-12-13 23:07:14.165568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.121 [2024-12-13 23:07:14.165576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:35.121 [2024-12-13 23:07:14.165584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.121 [2024-12-13 23:07:14.165590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.121 [2024-12-13 23:07:14.165625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.121 [2024-12-13 23:07:14.165633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:35.121 [2024-12-13 23:07:14.165640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.121 [2024-12-13 23:07:14.165649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.121 [2024-12-13 23:07:14.165691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.121 [2024-12-13 23:07:14.165699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:35.121 [2024-12-13 23:07:14.165707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.121 [2024-12-13 23:07:14.165713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.121 [2024-12-13 23:07:14.165852] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 292.242 ms, result 0 00:26:35.121 true 00:26:35.121 23:07:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82221 00:26:35.121 23:07:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82221 00:26:35.121 23:07:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:35.121 [2024-12-13 23:07:14.254955] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:26:35.121 [2024-12-13 23:07:14.255071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82802 ] 00:26:35.381 [2024-12-13 23:07:14.410336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.381 [2024-12-13 23:07:14.506863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.764  [2024-12-13T23:07:16.844Z] Copying: 251/1024 [MB] (251 MBps) [2024-12-13T23:07:17.784Z] Copying: 506/1024 [MB] (254 MBps) [2024-12-13T23:07:18.726Z] Copying: 757/1024 [MB] (251 MBps) [2024-12-13T23:07:18.986Z] Copying: 1006/1024 [MB] (248 MBps) [2024-12-13T23:07:19.557Z] Copying: 1024/1024 [MB] (average 251 MBps) 00:26:40.417 00:26:40.417 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82221 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:40.418 23:07:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:40.418 [2024-12-13 23:07:19.445071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:40.418 [2024-12-13 23:07:19.445192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82863 ] 00:26:40.678 [2024-12-13 23:07:19.598732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.678 [2024-12-13 23:07:19.682359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.939 [2024-12-13 23:07:19.916865] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:40.939 [2024-12-13 23:07:19.916924] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:40.939 [2024-12-13 23:07:19.980412] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:40.939 [2024-12-13 23:07:19.980937] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:40.939 [2024-12-13 23:07:19.981653] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:41.248 [2024-12-13 23:07:20.343597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.248 [2024-12-13 23:07:20.343632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:41.248 [2024-12-13 23:07:20.343643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:41.248 [2024-12-13 23:07:20.343652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.248 [2024-12-13 23:07:20.343690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.248 [2024-12-13 23:07:20.343699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:41.248 [2024-12-13 23:07:20.343706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:41.248 [2024-12-13 23:07:20.343712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.248 [2024-12-13 23:07:20.343726] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:41.248 
[2024-12-13 23:07:20.344249] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:41.248 [2024-12-13 23:07:20.344268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.248 [2024-12-13 23:07:20.344274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:41.248 [2024-12-13 23:07:20.344282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:26:41.248 [2024-12-13 23:07:20.344288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.248 [2024-12-13 23:07:20.345550] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:41.248 [2024-12-13 23:07:20.356078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.248 [2024-12-13 23:07:20.356105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:41.248 [2024-12-13 23:07:20.356115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.529 ms 00:26:41.248 [2024-12-13 23:07:20.356121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.248 [2024-12-13 23:07:20.356168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.248 [2024-12-13 23:07:20.356177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:41.248 [2024-12-13 23:07:20.356183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:41.248 [2024-12-13 23:07:20.356189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.531 [2024-12-13 23:07:20.362431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.531 [2024-12-13 23:07:20.362455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:41.531 [2024-12-13 23:07:20.362462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:26:41.531 [2024-12-13 23:07:20.362468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.531 [2024-12-13 23:07:20.362526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.531 [2024-12-13 23:07:20.362533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:41.531 [2024-12-13 23:07:20.362540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:41.531 [2024-12-13 23:07:20.362546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.531 [2024-12-13 23:07:20.362585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.531 [2024-12-13 23:07:20.362594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:41.531 [2024-12-13 23:07:20.362601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:41.531 [2024-12-13 23:07:20.362607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.531 [2024-12-13 23:07:20.362621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:41.531 [2024-12-13 23:07:20.365729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.531 [2024-12-13 23:07:20.365752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:41.531 [2024-12-13 23:07:20.365768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.112 ms 00:26:41.531 [2024-12-13 23:07:20.365774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:41.531 [2024-12-13 23:07:20.365803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.531 [2024-12-13 23:07:20.365810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:41.531 [2024-12-13 23:07:20.365817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:41.531 [2024-12-13 23:07:20.365823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.531 [2024-12-13 23:07:20.365839] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:41.531 [2024-12-13 23:07:20.365856] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:41.531 [2024-12-13 23:07:20.365884] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:41.531 [2024-12-13 23:07:20.365897] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:41.531 [2024-12-13 23:07:20.365981] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:41.531 [2024-12-13 23:07:20.365990] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:41.531 [2024-12-13 23:07:20.365999] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:41.531 [2024-12-13 23:07:20.366009] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:41.531 [2024-12-13 23:07:20.366016] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:41.531 [2024-12-13 23:07:20.366023] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:41.531 [2024-12-13 23:07:20.366029] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:41.531 [2024-12-13 23:07:20.366035] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:41.531 [2024-12-13 23:07:20.366041] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:41.531 [2024-12-13 23:07:20.366047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.531 [2024-12-13 23:07:20.366053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:41.531 [2024-12-13 23:07:20.366059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:26:41.531 [2024-12-13 23:07:20.366065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.531 [2024-12-13 23:07:20.366128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.531 [2024-12-13 23:07:20.366137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:41.531 [2024-12-13 23:07:20.366143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:41.531 [2024-12-13 23:07:20.366149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.531 [2024-12-13 23:07:20.366221] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:41.531 [2024-12-13 23:07:20.366229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:41.531 [2024-12-13 23:07:20.366236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.531 [2024-12-13 23:07:20.366242] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.531 [2024-12-13 23:07:20.366248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:41.531 [2024-12-13 23:07:20.366254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:41.531 [2024-12-13 23:07:20.366260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:41.531 [2024-12-13 23:07:20.366265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:41.531 [2024-12-13 23:07:20.366271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:41.531 [2024-12-13 23:07:20.366282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.531 [2024-12-13 23:07:20.366287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:41.531 [2024-12-13 23:07:20.366295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:41.531 [2024-12-13 23:07:20.366300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.531 [2024-12-13 23:07:20.366306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:41.531 [2024-12-13 23:07:20.366312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:41.531 [2024-12-13 23:07:20.366317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.531 [2024-12-13 23:07:20.366323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:41.532 [2024-12-13 23:07:20.366328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:41.532 [2024-12-13 23:07:20.366333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.532 [2024-12-13 23:07:20.366338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:41.532 [2024-12-13 23:07:20.366344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:41.532 [2024-12-13 23:07:20.366349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.532 [2024-12-13 23:07:20.366354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:41.532 [2024-12-13 23:07:20.366360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:41.532 [2024-12-13 23:07:20.366364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.532 [2024-12-13 23:07:20.366369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:41.532 [2024-12-13 23:07:20.366375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:41.532 [2024-12-13 23:07:20.366380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.532 [2024-12-13 23:07:20.366385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:41.532 [2024-12-13 23:07:20.366390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:41.532 [2024-12-13 23:07:20.366395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.532 [2024-12-13 23:07:20.366400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:41.532 [2024-12-13 23:07:20.366405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:41.532 [2024-12-13 23:07:20.366410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.532 [2024-12-13 23:07:20.366416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:41.532 
[2024-12-13 23:07:20.366420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:41.532 [2024-12-13 23:07:20.366425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.532 [2024-12-13 23:07:20.366430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:41.532 [2024-12-13 23:07:20.366434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:41.532 [2024-12-13 23:07:20.366439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.532 [2024-12-13 23:07:20.366444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:41.532 [2024-12-13 23:07:20.366449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:41.532 [2024-12-13 23:07:20.366454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.532 [2024-12-13 23:07:20.366461] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:41.532 [2024-12-13 23:07:20.366467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:41.532 [2024-12-13 23:07:20.366475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.532 [2024-12-13 23:07:20.366481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.532 [2024-12-13 23:07:20.366487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:41.532 [2024-12-13 23:07:20.366491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:41.532 [2024-12-13 23:07:20.366497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:41.532 [2024-12-13 23:07:20.366503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:41.532 [2024-12-13 23:07:20.366508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:41.532 [2024-12-13 23:07:20.366513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:41.532 [2024-12-13 23:07:20.366519] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:41.532 [2024-12-13 23:07:20.366526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.532 [2024-12-13 23:07:20.366533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:41.532 [2024-12-13 23:07:20.366538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:41.532 [2024-12-13 23:07:20.366544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:41.532 [2024-12-13 23:07:20.366550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:41.532 [2024-12-13 23:07:20.366556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:41.532 [2024-12-13 23:07:20.366562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:41.532 [2024-12-13 23:07:20.366568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:26:41.532 [2024-12-13 23:07:20.366573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:41.532 [2024-12-13 23:07:20.366579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:41.532 [2024-12-13 23:07:20.366584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:41.532 [2024-12-13 23:07:20.366590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:41.532 [2024-12-13 23:07:20.366596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:41.532 [2024-12-13 23:07:20.366602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:41.532 [2024-12-13 23:07:20.366607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:41.532 [2024-12-13 23:07:20.366613] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:41.532 [2024-12-13 23:07:20.366620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.532 [2024-12-13 23:07:20.366627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:41.532 [2024-12-13 23:07:20.366633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:41.532 [2024-12-13 23:07:20.366639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:41.532 [2024-12-13 23:07:20.366645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:41.532 [2024-12-13 23:07:20.366652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.366658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:41.532 [2024-12-13 23:07:20.366664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:26:41.532 [2024-12-13 23:07:20.366670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.390965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.390993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:41.532 [2024-12-13 23:07:20.391001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.251 ms 00:26:41.532 [2024-12-13 23:07:20.391008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.391076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.391083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:41.532 [2024-12-13 23:07:20.391090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:41.532 [2024-12-13 
23:07:20.391096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.433348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.433381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:41.532 [2024-12-13 23:07:20.433392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.211 ms 00:26:41.532 [2024-12-13 23:07:20.433399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.433437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.433445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:41.532 [2024-12-13 23:07:20.433452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:41.532 [2024-12-13 23:07:20.433458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.433886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.433906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:41.532 [2024-12-13 23:07:20.433914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:26:41.532 [2024-12-13 23:07:20.433927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.434031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.434039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:41.532 [2024-12-13 23:07:20.434046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:26:41.532 [2024-12-13 23:07:20.434052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.445932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.445957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:41.532 [2024-12-13 23:07:20.445965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.863 ms 00:26:41.532 [2024-12-13 23:07:20.445971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.456673] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:41.532 [2024-12-13 23:07:20.456700] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:41.532 [2024-12-13 23:07:20.456710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.456717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:41.532 [2024-12-13 23:07:20.456724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.652 ms 00:26:41.532 [2024-12-13 23:07:20.456730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.475618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.475646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:41.532 [2024-12-13 23:07:20.475655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.850 ms 00:26:41.532 [2024-12-13 23:07:20.475662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:41.532 [2024-12-13 23:07:20.485128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.532 [2024-12-13 23:07:20.485149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:41.532 [2024-12-13 23:07:20.485156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.436 ms 00:26:41.532 [2024-12-13 23:07:20.485162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.532 [2024-12-13 23:07:20.494097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.494122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:41.533 [2024-12-13 23:07:20.494130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.910 ms 00:26:41.533 [2024-12-13 23:07:20.494135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.494597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.494617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:41.533 [2024-12-13 23:07:20.494624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:26:41.533 [2024-12-13 23:07:20.494631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.543043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.543083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:41.533 [2024-12-13 23:07:20.543094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.396 ms 00:26:41.533 [2024-12-13 23:07:20.543101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.551687] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:41.533 [2024-12-13 23:07:20.554093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.554116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:41.533 [2024-12-13 23:07:20.554125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.956 ms 00:26:41.533 [2024-12-13 23:07:20.554136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.554197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.554206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:41.533 [2024-12-13 23:07:20.554213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:41.533 [2024-12-13 23:07:20.554219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.554291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.554300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:41.533 [2024-12-13 23:07:20.554307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:41.533 [2024-12-13 23:07:20.554313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.554333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.554340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:41.533 
[2024-12-13 23:07:20.554347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:41.533 [2024-12-13 23:07:20.554354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.554384] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:41.533 [2024-12-13 23:07:20.554393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.554400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:41.533 [2024-12-13 23:07:20.554406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:41.533 [2024-12-13 23:07:20.554417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.572965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.572993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:41.533 [2024-12-13 23:07:20.573002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.534 ms 00:26:41.533 [2024-12-13 23:07:20.573009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.573067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.533 [2024-12-13 23:07:20.573074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:41.533 [2024-12-13 23:07:20.573082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:41.533 [2024-12-13 23:07:20.573088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.533 [2024-12-13 23:07:20.574039] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 230.053 ms, result 0 00:26:42.475  [2024-12-13T23:07:23.000Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-13T23:07:23.945Z] Copying: 31/1024 [MB] (11 MBps) [2024-12-13T23:07:24.888Z] Copying: 43/1024 [MB] (11 MBps) [2024-12-13T23:07:25.832Z] Copying: 55/1024 [MB] (11 MBps) [2024-12-13T23:07:26.777Z] Copying: 66/1024 [MB] (11 MBps) [2024-12-13T23:07:27.719Z] Copying: 78/1024 [MB] (11 MBps) [2024-12-13T23:07:28.662Z] Copying: 89/1024 [MB] (11 MBps) [2024-12-13T23:07:29.606Z] Copying: 100/1024 [MB] (11 MBps) [2024-12-13T23:07:30.991Z] Copying: 112/1024 [MB] (11 MBps) [2024-12-13T23:07:31.932Z] Copying: 123/1024 [MB] (11 MBps) [2024-12-13T23:07:32.875Z] Copying: 134/1024 [MB] (11 MBps) [2024-12-13T23:07:33.817Z] Copying: 146/1024 [MB] (11 MBps) [2024-12-13T23:07:34.760Z] Copying: 157/1024 [MB] (11 MBps) [2024-12-13T23:07:35.707Z] Copying: 168/1024 [MB] (10 MBps) [2024-12-13T23:07:36.650Z] Copying: 179/1024 [MB] (11 MBps) [2024-12-13T23:07:37.594Z] Copying: 190/1024 [MB] (11 MBps) [2024-12-13T23:07:38.981Z] Copying: 202/1024 [MB] (11 MBps) [2024-12-13T23:07:39.926Z] Copying: 213/1024 [MB] (11 MBps) [2024-12-13T23:07:40.871Z] Copying: 224/1024 [MB] (10 MBps) [2024-12-13T23:07:41.815Z] Copying: 235/1024 [MB] (11 MBps) [2024-12-13T23:07:42.760Z] Copying: 246/1024 [MB] (11 MBps) [2024-12-13T23:07:43.705Z] Copying: 257/1024 [MB] (10 MBps) [2024-12-13T23:07:44.650Z] Copying: 268/1024 [MB] (10 MBps) [2024-12-13T23:07:45.600Z] Copying: 280/1024 [MB] (11 MBps) [2024-12-13T23:07:47.007Z] Copying: 291/1024 [MB] (10 MBps) [2024-12-13T23:07:47.598Z] Copying: 302/1024 [MB] (11 MBps) [2024-12-13T23:07:48.986Z] Copying: 313/1024 [MB] (10 MBps) [2024-12-13T23:07:49.931Z] Copying: 324/1024 [MB] 
(11 MBps) [2024-12-13T23:07:50.875Z] Copying: 336/1024 [MB] (11 MBps) [2024-12-13T23:07:51.819Z] Copying: 347/1024 [MB] (11 MBps) [2024-12-13T23:07:52.763Z] Copying: 359/1024 [MB] (11 MBps) [2024-12-13T23:07:53.706Z] Copying: 370/1024 [MB] (11 MBps) [2024-12-13T23:07:54.649Z] Copying: 381/1024 [MB] (10 MBps) [2024-12-13T23:07:55.592Z] Copying: 392/1024 [MB] (11 MBps) [2024-12-13T23:07:56.978Z] Copying: 404/1024 [MB] (11 MBps) [2024-12-13T23:07:57.922Z] Copying: 415/1024 [MB] (11 MBps) [2024-12-13T23:07:58.869Z] Copying: 427/1024 [MB] (11 MBps) [2024-12-13T23:07:59.813Z] Copying: 438/1024 [MB] (11 MBps) [2024-12-13T23:08:00.758Z] Copying: 449/1024 [MB] (10 MBps) [2024-12-13T23:08:01.702Z] Copying: 461/1024 [MB] (12 MBps) [2024-12-13T23:08:02.646Z] Copying: 473/1024 [MB] (11 MBps) [2024-12-13T23:08:03.590Z] Copying: 484/1024 [MB] (11 MBps) [2024-12-13T23:08:04.978Z] Copying: 496/1024 [MB] (11 MBps) [2024-12-13T23:08:05.925Z] Copying: 508/1024 [MB] (12 MBps) [2024-12-13T23:08:06.868Z] Copying: 520/1024 [MB] (12 MBps) [2024-12-13T23:08:07.813Z] Copying: 532/1024 [MB] (12 MBps) [2024-12-13T23:08:08.757Z] Copying: 544/1024 [MB] (11 MBps) [2024-12-13T23:08:09.701Z] Copying: 556/1024 [MB] (11 MBps) [2024-12-13T23:08:10.646Z] Copying: 567/1024 [MB] (11 MBps) [2024-12-13T23:08:11.591Z] Copying: 579/1024 [MB] (11 MBps) [2024-12-13T23:08:13.037Z] Copying: 590/1024 [MB] (11 MBps) [2024-12-13T23:08:13.610Z] Copying: 601/1024 [MB] (11 MBps) [2024-12-13T23:08:14.997Z] Copying: 613/1024 [MB] (11 MBps) [2024-12-13T23:08:15.940Z] Copying: 624/1024 [MB] (11 MBps) [2024-12-13T23:08:16.884Z] Copying: 636/1024 [MB] (11 MBps) [2024-12-13T23:08:17.828Z] Copying: 647/1024 [MB] (11 MBps) [2024-12-13T23:08:18.771Z] Copying: 658/1024 [MB] (10 MBps) [2024-12-13T23:08:19.716Z] Copying: 670/1024 [MB] (11 MBps) [2024-12-13T23:08:20.661Z] Copying: 681/1024 [MB] (11 MBps) [2024-12-13T23:08:21.606Z] Copying: 693/1024 [MB] (11 MBps) [2024-12-13T23:08:22.996Z] Copying: 704/1024 [MB] (11 MBps) [2024-12-13T23:08:23.941Z] Copying: 715/1024 [MB] (11 MBps) [2024-12-13T23:08:24.883Z] Copying: 727/1024 [MB] (11 MBps) [2024-12-13T23:08:25.826Z] Copying: 739/1024 [MB] (11 MBps) [2024-12-13T23:08:26.769Z] Copying: 752/1024 [MB] (12 MBps) [2024-12-13T23:08:27.714Z] Copying: 762/1024 [MB] (10 MBps) [2024-12-13T23:08:28.658Z] Copying: 773/1024 [MB] (10 MBps) [2024-12-13T23:08:29.602Z] Copying: 786/1024 [MB] (12 MBps) [2024-12-13T23:08:30.990Z] Copying: 798/1024 [MB] (12 MBps) [2024-12-13T23:08:31.935Z] Copying: 809/1024 [MB] (11 MBps) [2024-12-13T23:08:32.879Z] Copying: 820/1024 [MB] (11 MBps) [2024-12-13T23:08:33.825Z] Copying: 831/1024 [MB] (11 MBps) [2024-12-13T23:08:34.770Z] Copying: 843/1024 [MB] (11 MBps) [2024-12-13T23:08:35.714Z] Copying: 856/1024 [MB] (12 MBps) [2024-12-13T23:08:36.659Z] Copying: 867/1024 [MB] (11 MBps) [2024-12-13T23:08:37.605Z] Copying: 879/1024 [MB] (11 MBps) [2024-12-13T23:08:39.024Z] Copying: 890/1024 [MB] (11 MBps) [2024-12-13T23:08:39.597Z] Copying: 901/1024 [MB] (11 MBps) [2024-12-13T23:08:40.985Z] Copying: 913/1024 [MB] (11 MBps) [2024-12-13T23:08:41.926Z] Copying: 924/1024 [MB] (11 MBps) [2024-12-13T23:08:42.867Z] Copying: 936/1024 [MB] (11 MBps) [2024-12-13T23:08:43.807Z] Copying: 947/1024 [MB] (11 MBps) [2024-12-13T23:08:44.750Z] Copying: 957/1024 [MB] (10 MBps) [2024-12-13T23:08:45.694Z] Copying: 968/1024 [MB] (10 MBps) [2024-12-13T23:08:46.638Z] Copying: 978/1024 [MB] (10 MBps) [2024-12-13T23:08:48.025Z] Copying: 990/1024 [MB] (11 MBps) [2024-12-13T23:08:48.598Z] Copying: 1001/1024 [MB] (11 MBps) 
[2024-12-13T23:08:49.984Z] Copying: 1012/1024 [MB] (10 MBps) [2024-12-13T23:08:50.245Z] Copying: 1023/1024 [MB] (10 MBps) [2024-12-13T23:08:50.245Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-13 23:08:50.026679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.105 [2024-12-13 23:08:50.026738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:11.105 [2024-12-13 23:08:50.026751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:11.105 [2024-12-13 23:08:50.026770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.105 [2024-12-13 23:08:50.028457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:11.105 [2024-12-13 23:08:50.031913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.105 [2024-12-13 23:08:50.031941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:11.105 [2024-12-13 23:08:50.031950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.428 ms 00:28:11.105 [2024-12-13 23:08:50.031963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.105 [2024-12-13 23:08:50.042033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.105 [2024-12-13 23:08:50.042062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:11.105 [2024-12-13 23:08:50.042070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.713 ms 00:28:11.105 [2024-12-13 23:08:50.042077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.105 [2024-12-13 23:08:50.061390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.105 [2024-12-13 23:08:50.061417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:11.105 [2024-12-13 23:08:50.061425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.301 ms 00:28:11.105 [2024-12-13 23:08:50.061431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.105 [2024-12-13 23:08:50.066003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.105 [2024-12-13 23:08:50.066024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:11.105 [2024-12-13 23:08:50.066033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.550 ms 00:28:11.105 [2024-12-13 23:08:50.066039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.105 [2024-12-13 23:08:50.085647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.105 [2024-12-13 23:08:50.085673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:11.105 [2024-12-13 23:08:50.085681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.574 ms 00:28:11.105 [2024-12-13 23:08:50.085687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.105 [2024-12-13 23:08:50.097298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.105 [2024-12-13 23:08:50.097324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:11.105 [2024-12-13 23:08:50.097332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.585 ms 00:28:11.105 [2024-12-13 23:08:50.097339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.367 [2024-12-13 23:08:50.305319] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.367 [2024-12-13 23:08:50.305346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:11.367 [2024-12-13 23:08:50.305358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 207.952 ms 00:28:11.367 [2024-12-13 23:08:50.305365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.367 [2024-12-13 23:08:50.323805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.367 [2024-12-13 23:08:50.323837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:11.367 [2024-12-13 23:08:50.323846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.430 ms 00:28:11.367 [2024-12-13 23:08:50.323859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.367 [2024-12-13 23:08:50.341965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.367 [2024-12-13 23:08:50.341990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:11.367 [2024-12-13 23:08:50.341998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.080 ms 00:28:11.367 [2024-12-13 23:08:50.342003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.367 [2024-12-13 23:08:50.359332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.367 [2024-12-13 23:08:50.359356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:11.367 [2024-12-13 23:08:50.359363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.303 ms 00:28:11.367 [2024-12-13 23:08:50.359369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.367 [2024-12-13 23:08:50.377006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.367 [2024-12-13 23:08:50.377028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:11.367 [2024-12-13 23:08:50.377035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.594 ms 00:28:11.367 [2024-12-13 23:08:50.377041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.367 [2024-12-13 23:08:50.377065] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:11.367 [2024-12-13 23:08:50.377076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 82688 / 261120 wr_cnt: 1 state: open 00:28:11.367 [2024-12-13 23:08:50.377084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:11.367 [2024-12-13 23:08:50.377171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377274] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 
23:08:50.377426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:28:11.368 [2024-12-13 23:08:50.377577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:11.368 [2024-12-13 23:08:50.377684] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:11.368 [2024-12-13 23:08:50.377690] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80fd5bf1-b48b-459d-9945-757b4839e422 00:28:11.368 [2024-12-13 23:08:50.377704] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 82688 00:28:11.368 [2024-12-13 23:08:50.377710] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 83648 00:28:11.368 [2024-12-13 23:08:50.377715] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 82688 00:28:11.368 [2024-12-13 23:08:50.377722] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0116 00:28:11.368 [2024-12-13 23:08:50.377728] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:11.368 [2024-12-13 23:08:50.377735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:11.368 [2024-12-13 23:08:50.377741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:11.368 [2024-12-13 23:08:50.377746] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:11.368 [2024-12-13 23:08:50.377752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:11.369 [2024-12-13 23:08:50.377767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.369 [2024-12-13 23:08:50.377774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:11.369 [2024-12-13 23:08:50.377780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:28:11.369 [2024-12-13 23:08:50.377786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.369 [2024-12-13 23:08:50.388034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.369 [2024-12-13 23:08:50.388056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:11.369 [2024-12-13 23:08:50.388065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.048 ms 00:28:11.369 [2024-12-13 23:08:50.388072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.369 [2024-12-13 23:08:50.388352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.369 [2024-12-13 23:08:50.388360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:11.369 [2024-12-13 23:08:50.388367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:28:11.369 [2024-12-13 23:08:50.388377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.369 [2024-12-13 23:08:50.415657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.369 [2024-12-13 23:08:50.415683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:11.369 [2024-12-13 23:08:50.415691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.369 [2024-12-13 23:08:50.415697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.369 [2024-12-13 23:08:50.415738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.369 [2024-12-13 23:08:50.415745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:11.369 [2024-12-13 23:08:50.415752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.369 [2024-12-13 23:08:50.415772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.369 [2024-12-13 23:08:50.415816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.369 [2024-12-13 23:08:50.415829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:11.369 [2024-12-13 23:08:50.415837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.369 [2024-12-13 23:08:50.415843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.369 [2024-12-13 23:08:50.415854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.369 [2024-12-13 23:08:50.415860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:11.369 [2024-12-13 23:08:50.415867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.369 [2024-12-13 23:08:50.415872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.369 [2024-12-13 23:08:50.478019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.369 [2024-12-13 23:08:50.478051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:28:11.369 [2024-12-13 23:08:50.478060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.369 [2024-12-13 23:08:50.478067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.630 [2024-12-13 23:08:50.529111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.630 [2024-12-13 23:08:50.529144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:11.630 [2024-12-13 23:08:50.529154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.630 [2024-12-13 23:08:50.529165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.630 [2024-12-13 23:08:50.529233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.630 [2024-12-13 23:08:50.529241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:11.630 [2024-12-13 23:08:50.529248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.630 [2024-12-13 23:08:50.529254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.630 [2024-12-13 23:08:50.529283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.630 [2024-12-13 23:08:50.529291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:11.630 [2024-12-13 23:08:50.529298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.630 [2024-12-13 23:08:50.529305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.630 [2024-12-13 23:08:50.529381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.630 [2024-12-13 23:08:50.529391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:11.630 [2024-12-13 23:08:50.529398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.630 [2024-12-13 23:08:50.529404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.630 [2024-12-13 23:08:50.529429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.630 [2024-12-13 23:08:50.529436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:11.630 [2024-12-13 23:08:50.529443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.630 [2024-12-13 23:08:50.529449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.630 [2024-12-13 23:08:50.529486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.630 [2024-12-13 23:08:50.529500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:11.630 [2024-12-13 23:08:50.529507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.630 [2024-12-13 23:08:50.529513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.630 [2024-12-13 23:08:50.529552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.630 [2024-12-13 23:08:50.529560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:11.630 [2024-12-13 23:08:50.529566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.630 [2024-12-13 23:08:50.529572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.630 [2024-12-13 23:08:50.529683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
503.414 ms, result 0 00:28:12.573 00:28:12.573 00:28:12.573 23:08:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:15.117 23:08:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:15.117 [2024-12-13 23:08:53.849997] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:15.117 [2024-12-13 23:08:53.850117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83821 ] 00:28:15.117 [2024-12-13 23:08:54.005814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.117 [2024-12-13 23:08:54.094809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.379 [2024-12-13 23:08:54.328096] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:15.379 [2024-12-13 23:08:54.328154] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:15.379 [2024-12-13 23:08:54.484100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.484138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:15.379 [2024-12-13 23:08:54.484149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:15.379 [2024-12-13 23:08:54.484156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.379 [2024-12-13 23:08:54.484195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.484205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:15.379 [2024-12-13 23:08:54.484211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:15.379 [2024-12-13 23:08:54.484217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.379 [2024-12-13 23:08:54.484231] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:15.379 [2024-12-13 23:08:54.484803] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:15.379 [2024-12-13 23:08:54.484823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.484830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:15.379 [2024-12-13 23:08:54.484837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:28:15.379 [2024-12-13 23:08:54.484843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.379 [2024-12-13 23:08:54.486069] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:15.379 [2024-12-13 23:08:54.496792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.496819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:15.379 [2024-12-13 23:08:54.496828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.724 ms 00:28:15.379 [2024-12-13 23:08:54.496835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:28:15.379 [2024-12-13 23:08:54.496883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.496891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:15.379 [2024-12-13 23:08:54.496897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:15.379 [2024-12-13 23:08:54.496903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.379 [2024-12-13 23:08:54.503147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.503170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:15.379 [2024-12-13 23:08:54.503177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.202 ms 00:28:15.379 [2024-12-13 23:08:54.503186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.379 [2024-12-13 23:08:54.503243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.503250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:15.379 [2024-12-13 23:08:54.503256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:15.379 [2024-12-13 23:08:54.503262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.379 [2024-12-13 23:08:54.503299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.503307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:15.379 [2024-12-13 23:08:54.503313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:15.379 [2024-12-13 23:08:54.503321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.379 [2024-12-13 23:08:54.503338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:15.379 [2024-12-13 23:08:54.506248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.506270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:15.379 [2024-12-13 23:08:54.506279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms 00:28:15.379 [2024-12-13 23:08:54.506285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.379 [2024-12-13 23:08:54.506316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.379 [2024-12-13 23:08:54.506323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:15.379 [2024-12-13 23:08:54.506330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:15.379 [2024-12-13 23:08:54.506336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.379 [2024-12-13 23:08:54.506350] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:15.379 [2024-12-13 23:08:54.506367] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:15.379 [2024-12-13 23:08:54.506394] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:15.380 [2024-12-13 23:08:54.506408] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:15.380 [2024-12-13 23:08:54.506492] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:15.380 [2024-12-13 23:08:54.506501] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:15.380 [2024-12-13 23:08:54.506510] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:15.380 [2024-12-13 23:08:54.506518] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:15.380 [2024-12-13 23:08:54.506525] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:15.380 [2024-12-13 23:08:54.506531] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:15.380 [2024-12-13 23:08:54.506537] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:15.380 [2024-12-13 23:08:54.506542] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:15.380 [2024-12-13 23:08:54.506551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:15.380 [2024-12-13 23:08:54.506557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.380 [2024-12-13 23:08:54.506564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:15.380 [2024-12-13 23:08:54.506570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:28:15.380 [2024-12-13 23:08:54.506575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.380 [2024-12-13 23:08:54.506639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.380 [2024-12-13 23:08:54.506646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:15.380 [2024-12-13 23:08:54.506652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:15.380 [2024-12-13 23:08:54.506657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.380 [2024-12-13 23:08:54.506733] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:15.380 [2024-12-13 23:08:54.506747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:15.380 [2024-12-13 23:08:54.506764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:15.380 [2024-12-13 23:08:54.506771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:15.380 [2024-12-13 23:08:54.506784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:15.380 [2024-12-13 23:08:54.506796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:15.380 [2024-12-13 23:08:54.506802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.380 [2024-12-13 23:08:54.506817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:15.380 [2024-12-13 23:08:54.506822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:15.380 [2024-12-13 23:08:54.506829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.380 [2024-12-13 23:08:54.506840] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md 00:28:15.380 [2024-12-13 23:08:54.506846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:15.380 [2024-12-13 23:08:54.506851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:15.380 [2024-12-13 23:08:54.506862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:15.380 [2024-12-13 23:08:54.506867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:15.380 [2024-12-13 23:08:54.506879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.380 [2024-12-13 23:08:54.506890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:15.380 [2024-12-13 23:08:54.506895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.380 [2024-12-13 23:08:54.506905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:15.380 [2024-12-13 23:08:54.506910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.380 [2024-12-13 23:08:54.506920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:15.380 [2024-12-13 23:08:54.506925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.380 [2024-12-13 23:08:54.506936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:15.380 [2024-12-13 23:08:54.506941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.380 [2024-12-13 23:08:54.506951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:15.380 [2024-12-13 23:08:54.506956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:15.380 [2024-12-13 23:08:54.506961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.380 [2024-12-13 23:08:54.506966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:15.380 [2024-12-13 23:08:54.506971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:15.380 [2024-12-13 23:08:54.506976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.380 [2024-12-13 23:08:54.506981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:15.380 [2024-12-13 23:08:54.506986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:15.380 [2024-12-13 23:08:54.506994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.380 [2024-12-13 23:08:54.507000] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:15.380 [2024-12-13 23:08:54.507007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:15.380 [2024-12-13 
23:08:54.507013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:15.380 [2024-12-13 23:08:54.507019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.380 [2024-12-13 23:08:54.507025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:15.380 [2024-12-13 23:08:54.507030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:15.380 [2024-12-13 23:08:54.507034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:15.380 [2024-12-13 23:08:54.507040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:15.380 [2024-12-13 23:08:54.507045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:15.380 [2024-12-13 23:08:54.507050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:15.380 [2024-12-13 23:08:54.507057] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:15.380 [2024-12-13 23:08:54.507064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.380 [2024-12-13 23:08:54.507072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:15.380 [2024-12-13 23:08:54.507077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:15.380 [2024-12-13 23:08:54.507082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:15.380 [2024-12-13 23:08:54.507088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:15.380 [2024-12-13 23:08:54.507093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:15.380 [2024-12-13 23:08:54.507098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:15.380 [2024-12-13 23:08:54.507104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:15.380 [2024-12-13 23:08:54.507110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:15.380 [2024-12-13 23:08:54.507115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:15.380 [2024-12-13 23:08:54.507120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:15.380 [2024-12-13 23:08:54.507126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:15.380 [2024-12-13 23:08:54.507131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:15.380 [2024-12-13 23:08:54.507136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:15.380 [2024-12-13 23:08:54.507142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:15.380 [2024-12-13 23:08:54.507147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:15.380 [2024-12-13 23:08:54.507154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.380 [2024-12-13 23:08:54.507160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:15.380 [2024-12-13 23:08:54.507166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:15.380 [2024-12-13 23:08:54.507172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:15.380 [2024-12-13 23:08:54.507178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:15.380 [2024-12-13 23:08:54.507184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.381 [2024-12-13 23:08:54.507191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:15.381 [2024-12-13 23:08:54.507196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:28:15.381 [2024-12-13 23:08:54.507202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.642 [2024-12-13 23:08:54.531220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.642 [2024-12-13 23:08:54.531249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:15.642 [2024-12-13 23:08:54.531259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.976 ms 00:28:15.642 [2024-12-13 23:08:54.531269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.642 [2024-12-13 23:08:54.531338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.642 [2024-12-13 23:08:54.531345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:15.642 [2024-12-13 23:08:54.531351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:15.642 [2024-12-13 23:08:54.531357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.642 [2024-12-13 23:08:54.570413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.642 [2024-12-13 23:08:54.570444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:15.642 [2024-12-13 23:08:54.570454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.014 ms 00:28:15.642 [2024-12-13 23:08:54.570461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.642 [2024-12-13 23:08:54.570493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.642 [2024-12-13 23:08:54.570501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:15.642 [2024-12-13 23:08:54.570510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:15.642 [2024-12-13 23:08:54.570516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.642 [2024-12-13 23:08:54.570934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.642 [2024-12-13 23:08:54.570953] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:15.642 [2024-12-13 23:08:54.570961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:28:15.642 [2024-12-13 23:08:54.570968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.642 [2024-12-13 23:08:54.571080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.642 [2024-12-13 23:08:54.571099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:15.642 [2024-12-13 23:08:54.571106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:28:15.642 [2024-12-13 23:08:54.571115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.642 [2024-12-13 23:08:54.582880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.582905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:15.643 [2024-12-13 23:08:54.582915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.748 ms 00:28:15.643 [2024-12-13 23:08:54.582921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.593826] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:15.643 [2024-12-13 23:08:54.593854] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:15.643 [2024-12-13 23:08:54.593864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.593871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:15.643 [2024-12-13 23:08:54.593878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.850 ms 00:28:15.643 [2024-12-13 23:08:54.593883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.612808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.612837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:15.643 [2024-12-13 23:08:54.612846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.894 ms 00:28:15.643 [2024-12-13 23:08:54.612853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.621955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.621981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:15.643 [2024-12-13 23:08:54.621989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.071 ms 00:28:15.643 [2024-12-13 23:08:54.621994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.630731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.630766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:15.643 [2024-12-13 23:08:54.630774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.710 ms 00:28:15.643 [2024-12-13 23:08:54.630780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.631244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.631261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L 
checkpointing 00:28:15.643 [2024-12-13 23:08:54.631270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:28:15.643 [2024-12-13 23:08:54.631276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.679397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.679432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:15.643 [2024-12-13 23:08:54.679446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.106 ms 00:28:15.643 [2024-12-13 23:08:54.679454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.687943] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:15.643 [2024-12-13 23:08:54.690329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.690353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:15.643 [2024-12-13 23:08:54.690363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.843 ms 00:28:15.643 [2024-12-13 23:08:54.690370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.690425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.690433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:15.643 [2024-12-13 23:08:54.690440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:15.643 [2024-12-13 23:08:54.690448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.691597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.691622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:15.643 [2024-12-13 23:08:54.691630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:28:15.643 [2024-12-13 23:08:54.691636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.691656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.691663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:15.643 [2024-12-13 23:08:54.691670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:15.643 [2024-12-13 23:08:54.691676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.691708] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:15.643 [2024-12-13 23:08:54.691716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.691723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:15.643 [2024-12-13 23:08:54.691729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:15.643 [2024-12-13 23:08:54.691735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.709932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.709960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:15.643 [2024-12-13 23:08:54.709972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.183 ms 
00:28:15.643 [2024-12-13 23:08:54.709979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.710037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.643 [2024-12-13 23:08:54.710045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:15.643 [2024-12-13 23:08:54.710052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:15.643 [2024-12-13 23:08:54.710058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.643 [2024-12-13 23:08:54.710999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.520 ms, result 0 00:28:17.027  [2024-12-13T23:08:57.110Z] Copying: 1072/1048576 [kB] (1072 kBps) [2024-12-13T23:08:58.054Z] Copying: 4604/1048576 [kB] (3532 kBps) [2024-12-13T23:08:58.998Z] Copying: 25/1024 [MB] (21 MBps) [2024-12-13T23:08:59.942Z] Copying: 60/1024 [MB] (34 MBps) [2024-12-13T23:09:01.328Z] Copying: 95/1024 [MB] (34 MBps) [2024-12-13T23:09:02.271Z] Copying: 127/1024 [MB] (32 MBps) [2024-12-13T23:09:03.214Z] Copying: 153/1024 [MB] (25 MBps) [2024-12-13T23:09:04.159Z] Copying: 172/1024 [MB] (19 MBps) [2024-12-13T23:09:05.160Z] Copying: 189/1024 [MB] (16 MBps) [2024-12-13T23:09:06.104Z] Copying: 205/1024 [MB] (16 MBps) [2024-12-13T23:09:07.048Z] Copying: 223/1024 [MB] (18 MBps) [2024-12-13T23:09:07.993Z] Copying: 241/1024 [MB] (17 MBps) [2024-12-13T23:09:08.936Z] Copying: 259/1024 [MB] (17 MBps) [2024-12-13T23:09:10.323Z] Copying: 276/1024 [MB] (17 MBps) [2024-12-13T23:09:11.267Z] Copying: 293/1024 [MB] (16 MBps) [2024-12-13T23:09:12.212Z] Copying: 308/1024 [MB] (15 MBps) [2024-12-13T23:09:13.155Z] Copying: 325/1024 [MB] (16 MBps) [2024-12-13T23:09:14.099Z] Copying: 342/1024 [MB] (17 MBps) [2024-12-13T23:09:15.043Z] Copying: 360/1024 [MB] (17 MBps) [2024-12-13T23:09:15.987Z] Copying: 377/1024 [MB] (17 MBps) [2024-12-13T23:09:16.930Z] Copying: 397/1024 [MB] (19 MBps) [2024-12-13T23:09:18.317Z] Copying: 414/1024 [MB] (17 MBps) [2024-12-13T23:09:19.261Z] Copying: 430/1024 [MB] (15 MBps) [2024-12-13T23:09:20.204Z] Copying: 447/1024 [MB] (16 MBps) [2024-12-13T23:09:21.147Z] Copying: 465/1024 [MB] (17 MBps) [2024-12-13T23:09:22.091Z] Copying: 484/1024 [MB] (18 MBps) [2024-12-13T23:09:23.036Z] Copying: 502/1024 [MB] (18 MBps) [2024-12-13T23:09:23.979Z] Copying: 519/1024 [MB] (17 MBps) [2024-12-13T23:09:24.922Z] Copying: 535/1024 [MB] (16 MBps) [2024-12-13T23:09:26.319Z] Copying: 553/1024 [MB] (17 MBps) [2024-12-13T23:09:27.264Z] Copying: 571/1024 [MB] (17 MBps) [2024-12-13T23:09:28.208Z] Copying: 588/1024 [MB] (17 MBps) [2024-12-13T23:09:29.154Z] Copying: 606/1024 [MB] (17 MBps) [2024-12-13T23:09:30.104Z] Copying: 624/1024 [MB] (18 MBps) [2024-12-13T23:09:31.083Z] Copying: 642/1024 [MB] (18 MBps) [2024-12-13T23:09:32.048Z] Copying: 659/1024 [MB] (16 MBps) [2024-12-13T23:09:32.993Z] Copying: 677/1024 [MB] (17 MBps) [2024-12-13T23:09:33.938Z] Copying: 694/1024 [MB] (16 MBps) [2024-12-13T23:09:35.327Z] Copying: 710/1024 [MB] (16 MBps) [2024-12-13T23:09:36.270Z] Copying: 728/1024 [MB] (17 MBps) [2024-12-13T23:09:37.213Z] Copying: 745/1024 [MB] (17 MBps) [2024-12-13T23:09:38.157Z] Copying: 763/1024 [MB] (17 MBps) [2024-12-13T23:09:39.102Z] Copying: 779/1024 [MB] (16 MBps) [2024-12-13T23:09:40.045Z] Copying: 798/1024 [MB] (18 MBps) [2024-12-13T23:09:40.989Z] Copying: 814/1024 [MB] (16 MBps) [2024-12-13T23:09:41.932Z] Copying: 831/1024 [MB] (17 MBps) [2024-12-13T23:09:43.320Z] 
Copying: 857/1024 [MB] (25 MBps) [2024-12-13T23:09:44.264Z] Copying: 875/1024 [MB] (18 MBps) [2024-12-13T23:09:45.208Z] Copying: 892/1024 [MB] (16 MBps) [2024-12-13T23:09:46.150Z] Copying: 908/1024 [MB] (16 MBps) [2024-12-13T23:09:47.094Z] Copying: 925/1024 [MB] (17 MBps) [2024-12-13T23:09:48.038Z] Copying: 943/1024 [MB] (17 MBps) [2024-12-13T23:09:48.982Z] Copying: 961/1024 [MB] (17 MBps) [2024-12-13T23:09:49.926Z] Copying: 979/1024 [MB] (17 MBps) [2024-12-13T23:09:51.311Z] Copying: 997/1024 [MB] (17 MBps) [2024-12-13T23:09:51.572Z] Copying: 1013/1024 [MB] (16 MBps) [2024-12-13T23:09:52.146Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-13 23:09:51.875598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.006 [2024-12-13 23:09:51.875717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:13.006 [2024-12-13 23:09:51.875736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:13.006 [2024-12-13 23:09:51.875747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.006 [2024-12-13 23:09:51.875789] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:13.006 [2024-12-13 23:09:51.879556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.006 [2024-12-13 23:09:51.879608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:13.006 [2024-12-13 23:09:51.879622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.745 ms 00:29:13.006 [2024-12-13 23:09:51.879631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.006 [2024-12-13 23:09:51.879901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.006 [2024-12-13 23:09:51.879924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:13.006 [2024-12-13 23:09:51.879936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:29:13.006 [2024-12-13 23:09:51.879944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.006 [2024-12-13 23:09:51.896794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.006 [2024-12-13 23:09:51.896854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:13.006 [2024-12-13 23:09:51.896869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.830 ms 00:29:13.006 [2024-12-13 23:09:51.896879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.006 [2024-12-13 23:09:51.903717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.006 [2024-12-13 23:09:51.903779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:13.006 [2024-12-13 23:09:51.903800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.791 ms 00:29:13.006 [2024-12-13 23:09:51.903809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.006 [2024-12-13 23:09:51.932309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.006 [2024-12-13 23:09:51.932363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:13.006 [2024-12-13 23:09:51.932378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.432 ms 00:29:13.006 [2024-12-13 23:09:51.932387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.006 [2024-12-13 23:09:51.949850] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:29:13.006 [2024-12-13 23:09:51.949901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:13.006 [2024-12-13 23:09:51.949916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.410 ms 00:29:13.006 [2024-12-13 23:09:51.949925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.006 [2024-12-13 23:09:51.954601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.006 [2024-12-13 23:09:51.954653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:13.007 [2024-12-13 23:09:51.954667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.620 ms 00:29:13.007 [2024-12-13 23:09:51.954683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.007 [2024-12-13 23:09:51.981425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.007 [2024-12-13 23:09:51.981474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:13.007 [2024-12-13 23:09:51.981486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.724 ms 00:29:13.007 [2024-12-13 23:09:51.981494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.007 [2024-12-13 23:09:52.007613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.007 [2024-12-13 23:09:52.007661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:13.007 [2024-12-13 23:09:52.007675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.071 ms 00:29:13.007 [2024-12-13 23:09:52.007683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.007 [2024-12-13 23:09:52.033096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.007 [2024-12-13 23:09:52.033145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:13.007 [2024-12-13 23:09:52.033157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.363 ms 00:29:13.007 [2024-12-13 23:09:52.033166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.007 [2024-12-13 23:09:52.058529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.007 [2024-12-13 23:09:52.058577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:13.007 [2024-12-13 23:09:52.058590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.270 ms 00:29:13.007 [2024-12-13 23:09:52.058598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.007 [2024-12-13 23:09:52.058646] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:13.007 [2024-12-13 23:09:52.058665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:13.007 [2024-12-13 23:09:52.058678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:13.007 [2024-12-13 23:09:52.058687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058713] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 
23:09:52.058953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.058992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:29:13.007 [2024-12-13 23:09:52.059155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:13.007 [2024-12-13 23:09:52.059338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:13.008 [2024-12-13 23:09:52.059580] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:13.008 [2024-12-13 23:09:52.059592] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80fd5bf1-b48b-459d-9945-757b4839e422 00:29:13.008 [2024-12-13 23:09:52.059604] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:13.008 [2024-12-13 23:09:52.059613] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 181952 00:29:13.008 [2024-12-13 23:09:52.059627] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 179968 00:29:13.008 [2024-12-13 23:09:52.059637] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0110 00:29:13.008 [2024-12-13 23:09:52.059646] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:13.008 [2024-12-13 23:09:52.059667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:13.008 [2024-12-13 23:09:52.059676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:13.008 [2024-12-13 23:09:52.059683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:13.008 [2024-12-13 23:09:52.059694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:13.008 [2024-12-13 23:09:52.059703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.008 [2024-12-13 23:09:52.059713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:13.008 [2024-12-13 23:09:52.059722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:29:13.008 [2024-12-13 23:09:52.059730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.008 [2024-12-13 23:09:52.074716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.008 [2024-12-13 23:09:52.074773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:13.008 [2024-12-13 23:09:52.074786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.928 ms 00:29:13.008 [2024-12-13 23:09:52.074795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.008 [2024-12-13 23:09:52.075219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.008 [2024-12-13 23:09:52.075241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:13.008 [2024-12-13 23:09:52.075255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:29:13.008 [2024-12-13 23:09:52.075266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.008 [2024-12-13 23:09:52.115128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.008 [2024-12-13 23:09:52.115178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:13.008 [2024-12-13 23:09:52.115192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.008 [2024-12-13 23:09:52.115201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.008 [2024-12-13 23:09:52.115271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.008 [2024-12-13 23:09:52.115281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:13.008 [2024-12-13 23:09:52.115290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.008 [2024-12-13 23:09:52.115300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.008 [2024-12-13 23:09:52.115396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.008 [2024-12-13 23:09:52.115410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:13.008 [2024-12-13 23:09:52.115420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.008 [2024-12-13 23:09:52.115429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.008 [2024-12-13 23:09:52.115447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.008 [2024-12-13 23:09:52.115457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:29:13.008 [2024-12-13 23:09:52.115464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.008 [2024-12-13 23:09:52.115472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.269 [2024-12-13 23:09:52.208906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.269 [2024-12-13 23:09:52.208966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:13.269 [2024-12-13 23:09:52.208984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.269 [2024-12-13 23:09:52.208994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.269 [2024-12-13 23:09:52.284478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.269 [2024-12-13 23:09:52.284537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:13.269 [2024-12-13 23:09:52.284553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.269 [2024-12-13 23:09:52.284562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.269 [2024-12-13 23:09:52.284633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.269 [2024-12-13 23:09:52.284652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:13.269 [2024-12-13 23:09:52.284661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.269 [2024-12-13 23:09:52.284671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.269 [2024-12-13 23:09:52.284743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.269 [2024-12-13 23:09:52.284780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:13.269 [2024-12-13 23:09:52.284791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.269 [2024-12-13 23:09:52.284801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.269 [2024-12-13 23:09:52.284920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.269 [2024-12-13 23:09:52.284934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:13.269 [2024-12-13 23:09:52.284948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.269 [2024-12-13 23:09:52.284957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.269 [2024-12-13 23:09:52.284994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.269 [2024-12-13 23:09:52.285005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:13.269 [2024-12-13 23:09:52.285015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.269 [2024-12-13 23:09:52.285024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.269 [2024-12-13 23:09:52.285077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.269 [2024-12-13 23:09:52.285089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:13.269 [2024-12-13 23:09:52.285102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.269 [2024-12-13 23:09:52.285111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.269 [2024-12-13 23:09:52.285172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.269 [2024-12-13 23:09:52.285185] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:13.269 [2024-12-13 23:09:52.285195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.270 [2024-12-13 23:09:52.285204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.270 [2024-12-13 23:09:52.285370] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 409.736 ms, result 0 00:29:14.211 00:29:14.211 00:29:14.211 23:09:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:16.122 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:16.122 23:09:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:16.122 [2024-12-13 23:09:55.256687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:16.122 [2024-12-13 23:09:55.256823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84448 ] 00:29:16.386 [2024-12-13 23:09:55.418611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.647 [2024-12-13 23:09:55.555422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.907 [2024-12-13 23:09:55.898609] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:16.907 [2024-12-13 23:09:55.898704] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:17.172 [2024-12-13 23:09:56.065186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.172 [2024-12-13 23:09:56.065254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:17.172 [2024-12-13 23:09:56.065272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:17.172 [2024-12-13 23:09:56.065282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.172 [2024-12-13 23:09:56.065337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.172 [2024-12-13 23:09:56.065351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:17.172 [2024-12-13 23:09:56.065361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:17.172 [2024-12-13 23:09:56.065370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.172 [2024-12-13 23:09:56.065393] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:17.172 [2024-12-13 23:09:56.066170] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:17.172 [2024-12-13 23:09:56.066200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.172 [2024-12-13 23:09:56.066209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:17.172 [2024-12-13 23:09:56.066220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:29:17.172 [2024-12-13 23:09:56.066229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.172 [2024-12-13 
23:09:56.068541] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:17.172 [2024-12-13 23:09:56.084391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.172 [2024-12-13 23:09:56.084442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:17.172 [2024-12-13 23:09:56.084457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.851 ms 00:29:17.172 [2024-12-13 23:09:56.084467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.172 [2024-12-13 23:09:56.084555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.172 [2024-12-13 23:09:56.084567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:17.172 [2024-12-13 23:09:56.084576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:17.172 [2024-12-13 23:09:56.084585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.172 [2024-12-13 23:09:56.096370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.172 [2024-12-13 23:09:56.096415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:17.172 [2024-12-13 23:09:56.096427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.702 ms 00:29:17.172 [2024-12-13 23:09:56.096442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.172 [2024-12-13 23:09:56.096529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.172 [2024-12-13 23:09:56.096540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:17.172 [2024-12-13 23:09:56.096550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:17.172 [2024-12-13 23:09:56.096559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.172 [2024-12-13 23:09:56.096620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.172 [2024-12-13 23:09:56.096633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:17.172 [2024-12-13 23:09:56.096644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:17.172 [2024-12-13 23:09:56.096653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.172 [2024-12-13 23:09:56.096682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:17.172 [2024-12-13 23:09:56.101554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.172 [2024-12-13 23:09:56.101598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:17.172 [2024-12-13 23:09:56.101614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:29:17.172 [2024-12-13 23:09:56.101623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.101665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.101675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:17.173 [2024-12-13 23:09:56.101684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:17.173 [2024-12-13 23:09:56.101692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.101731] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:17.173 
[2024-12-13 23:09:56.101777] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:17.173 [2024-12-13 23:09:56.101822] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:17.173 [2024-12-13 23:09:56.101846] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:17.173 [2024-12-13 23:09:56.101960] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:17.173 [2024-12-13 23:09:56.101973] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:17.173 [2024-12-13 23:09:56.101986] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:17.173 [2024-12-13 23:09:56.101997] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102007] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102015] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:17.173 [2024-12-13 23:09:56.102024] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:17.173 [2024-12-13 23:09:56.102032] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:17.173 [2024-12-13 23:09:56.102044] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:17.173 [2024-12-13 23:09:56.102054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.102064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:17.173 [2024-12-13 23:09:56.102073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:29:17.173 [2024-12-13 23:09:56.102080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.102166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.102178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:17.173 [2024-12-13 23:09:56.102187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:17.173 [2024-12-13 23:09:56.102198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.102304] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:17.173 [2024-12-13 23:09:56.102327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:17.173 [2024-12-13 23:09:56.102337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:17.173 [2024-12-13 23:09:56.102364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:17.173 [2024-12-13 23:09:56.102390] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:17.173 [2024-12-13 23:09:56.102405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:17.173 [2024-12-13 23:09:56.102416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:17.173 [2024-12-13 23:09:56.102424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:17.173 [2024-12-13 23:09:56.102443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:17.173 [2024-12-13 23:09:56.102452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:17.173 [2024-12-13 23:09:56.102459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:17.173 [2024-12-13 23:09:56.102477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:17.173 [2024-12-13 23:09:56.102500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:17.173 [2024-12-13 23:09:56.102523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:17.173 [2024-12-13 23:09:56.102546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:17.173 [2024-12-13 23:09:56.102568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:17.173 [2024-12-13 23:09:56.102593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:17.173 [2024-12-13 23:09:56.102605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:17.173 [2024-12-13 23:09:56.102612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:17.173 [2024-12-13 23:09:56.102618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:17.173 [2024-12-13 23:09:56.102625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:17.173 [2024-12-13 23:09:56.102632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:17.173 [2024-12-13 23:09:56.102638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102649] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:17.173 [2024-12-13 23:09:56.102657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:17.173 [2024-12-13 23:09:56.102664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102673] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:17.173 [2024-12-13 23:09:56.102682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:17.173 [2024-12-13 23:09:56.102690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.173 [2024-12-13 23:09:56.102706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:17.173 [2024-12-13 23:09:56.102719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:17.173 [2024-12-13 23:09:56.102726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:17.173 [2024-12-13 23:09:56.102733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:17.173 [2024-12-13 23:09:56.102740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:17.173 [2024-12-13 23:09:56.102747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:17.173 [2024-12-13 23:09:56.102781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:17.173 [2024-12-13 23:09:56.102793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.173 [2024-12-13 23:09:56.102809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:17.173 [2024-12-13 23:09:56.102819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:17.173 [2024-12-13 23:09:56.102827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:17.173 [2024-12-13 23:09:56.102835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:17.173 [2024-12-13 23:09:56.102843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:17.173 [2024-12-13 23:09:56.102852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:17.173 [2024-12-13 23:09:56.102861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:17.173 [2024-12-13 23:09:56.102870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:17.173 [2024-12-13 23:09:56.102878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:17.173 [2024-12-13 23:09:56.102887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:17.173 [2024-12-13 23:09:56.102896] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:17.173 [2024-12-13 23:09:56.102904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:17.173 [2024-12-13 23:09:56.102912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:17.173 [2024-12-13 23:09:56.102921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:17.173 [2024-12-13 23:09:56.102929] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:17.173 [2024-12-13 23:09:56.102938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.173 [2024-12-13 23:09:56.102948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:17.173 [2024-12-13 23:09:56.102956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:17.173 [2024-12-13 23:09:56.102965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:17.173 [2024-12-13 23:09:56.102975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:17.173 [2024-12-13 23:09:56.102984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.102994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:17.173 [2024-12-13 23:09:56.103002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:29:17.173 [2024-12-13 23:09:56.103009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.142315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.142367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:17.173 [2024-12-13 23:09:56.142379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.257 ms 00:29:17.173 [2024-12-13 23:09:56.142394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.142491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.142500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:17.173 [2024-12-13 23:09:56.142510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:17.173 [2024-12-13 23:09:56.142519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.193629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.193687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:17.173 [2024-12-13 23:09:56.193702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.043 ms 00:29:17.173 [2024-12-13 23:09:56.193711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.193782] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.193794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:17.173 [2024-12-13 23:09:56.193809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:17.173 [2024-12-13 23:09:56.193818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.194601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.194648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:17.173 [2024-12-13 23:09:56.194660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:29:17.173 [2024-12-13 23:09:56.194669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.173 [2024-12-13 23:09:56.194868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.173 [2024-12-13 23:09:56.194882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:17.174 [2024-12-13 23:09:56.194896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:29:17.174 [2024-12-13 23:09:56.194904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.174 [2024-12-13 23:09:56.213370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.174 [2024-12-13 23:09:56.213421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:17.174 [2024-12-13 23:09:56.213435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.443 ms 00:29:17.174 [2024-12-13 23:09:56.213445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.174 [2024-12-13 23:09:56.229016] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:17.174 [2024-12-13 23:09:56.229067] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:17.174 [2024-12-13 23:09:56.229082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.174 [2024-12-13 23:09:56.229092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:17.174 [2024-12-13 23:09:56.229103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.510 ms 00:29:17.174 [2024-12-13 23:09:56.229111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.174 [2024-12-13 23:09:56.258245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.174 [2024-12-13 23:09:56.258305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:17.174 [2024-12-13 23:09:56.258320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.071 ms 00:29:17.174 [2024-12-13 23:09:56.258331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.174 [2024-12-13 23:09:56.271784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.174 [2024-12-13 23:09:56.271852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:17.174 [2024-12-13 23:09:56.271867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.366 ms 00:29:17.174 [2024-12-13 23:09:56.271876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.174 [2024-12-13 23:09:56.284533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.174 [2024-12-13 
23:09:56.284582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:17.174 [2024-12-13 23:09:56.284595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.601 ms 00:29:17.174 [2024-12-13 23:09:56.284604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.174 [2024-12-13 23:09:56.285301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.174 [2024-12-13 23:09:56.285337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:17.174 [2024-12-13 23:09:56.285352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:29:17.174 [2024-12-13 23:09:56.285361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.464 [2024-12-13 23:09:56.360837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.464 [2024-12-13 23:09:56.360899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:17.465 [2024-12-13 23:09:56.360923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.454 ms 00:29:17.465 [2024-12-13 23:09:56.360933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.465 [2024-12-13 23:09:56.373782] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:17.465 [2024-12-13 23:09:56.378005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.465 [2024-12-13 23:09:56.378051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:17.465 [2024-12-13 23:09:56.378066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.012 ms 00:29:17.465 [2024-12-13 23:09:56.378075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.465 [2024-12-13 23:09:56.378170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.465 [2024-12-13 23:09:56.378182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:17.465 [2024-12-13 23:09:56.378193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:17.465 [2024-12-13 23:09:56.378205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.465 [2024-12-13 23:09:56.379351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.465 [2024-12-13 23:09:56.379401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:17.465 [2024-12-13 23:09:56.379413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:29:17.465 [2024-12-13 23:09:56.379422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.465 [2024-12-13 23:09:56.379454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.465 [2024-12-13 23:09:56.379464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:17.465 [2024-12-13 23:09:56.379475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:17.465 [2024-12-13 23:09:56.379499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.465 [2024-12-13 23:09:56.379552] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:17.465 [2024-12-13 23:09:56.379565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.465 [2024-12-13 23:09:56.379575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:17.465 
[2024-12-13 23:09:56.379587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:17.465 [2024-12-13 23:09:56.379597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.465 [2024-12-13 23:09:56.406320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.465 [2024-12-13 23:09:56.406374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:17.465 [2024-12-13 23:09:56.406395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.700 ms 00:29:17.465 [2024-12-13 23:09:56.406405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.465 [2024-12-13 23:09:56.406498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.465 [2024-12-13 23:09:56.406511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:17.465 [2024-12-13 23:09:56.406521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:17.465 [2024-12-13 23:09:56.406531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.465 [2024-12-13 23:09:56.408124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 342.307 ms, result 0 00:29:18.848  [2024-12-13T23:09:58.932Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-13T23:09:59.876Z] Copying: 37/1024 [MB] (11 MBps) [2024-12-13T23:10:00.818Z] Copying: 49/1024 [MB] (11 MBps) [2024-12-13T23:10:01.761Z] Copying: 64/1024 [MB] (15 MBps) [2024-12-13T23:10:02.706Z] Copying: 91/1024 [MB] (27 MBps) [2024-12-13T23:10:03.648Z] Copying: 113/1024 [MB] (22 MBps) [2024-12-13T23:10:05.036Z] Copying: 134/1024 [MB] (20 MBps) [2024-12-13T23:10:05.609Z] Copying: 149/1024 [MB] (15 MBps) [2024-12-13T23:10:06.996Z] Copying: 161/1024 [MB] (11 MBps) [2024-12-13T23:10:07.942Z] Copying: 173/1024 [MB] (11 MBps) [2024-12-13T23:10:08.886Z] Copying: 184/1024 [MB] (11 MBps) [2024-12-13T23:10:09.830Z] Copying: 196/1024 [MB] (11 MBps) [2024-12-13T23:10:10.774Z] Copying: 206/1024 [MB] (10 MBps) [2024-12-13T23:10:11.719Z] Copying: 221584/1048576 [kB] (10192 kBps) [2024-12-13T23:10:12.663Z] Copying: 227/1024 [MB] (11 MBps) [2024-12-13T23:10:13.607Z] Copying: 239/1024 [MB] (11 MBps) [2024-12-13T23:10:14.995Z] Copying: 251/1024 [MB] (11 MBps) [2024-12-13T23:10:15.937Z] Copying: 262/1024 [MB] (11 MBps) [2024-12-13T23:10:16.889Z] Copying: 273/1024 [MB] (11 MBps) [2024-12-13T23:10:17.835Z] Copying: 285/1024 [MB] (11 MBps) [2024-12-13T23:10:18.780Z] Copying: 296/1024 [MB] (11 MBps) [2024-12-13T23:10:19.721Z] Copying: 308/1024 [MB] (11 MBps) [2024-12-13T23:10:20.669Z] Copying: 320/1024 [MB] (11 MBps) [2024-12-13T23:10:21.612Z] Copying: 331/1024 [MB] (11 MBps) [2024-12-13T23:10:23.041Z] Copying: 342/1024 [MB] (10 MBps) [2024-12-13T23:10:23.620Z] Copying: 353/1024 [MB] (11 MBps) [2024-12-13T23:10:25.007Z] Copying: 365/1024 [MB] (11 MBps) [2024-12-13T23:10:25.951Z] Copying: 377/1024 [MB] (11 MBps) [2024-12-13T23:10:26.895Z] Copying: 388/1024 [MB] (11 MBps) [2024-12-13T23:10:27.840Z] Copying: 400/1024 [MB] (11 MBps) [2024-12-13T23:10:28.784Z] Copying: 411/1024 [MB] (11 MBps) [2024-12-13T23:10:29.729Z] Copying: 423/1024 [MB] (11 MBps) [2024-12-13T23:10:30.672Z] Copying: 434/1024 [MB] (10 MBps) [2024-12-13T23:10:31.613Z] Copying: 445/1024 [MB] (10 MBps) [2024-12-13T23:10:33.000Z] Copying: 456/1024 [MB] (11 MBps) [2024-12-13T23:10:33.944Z] Copying: 468/1024 [MB] (12 MBps) [2024-12-13T23:10:34.889Z] Copying: 480/1024 [MB] (11 MBps) 
[2024-12-13T23:10:35.833Z] Copying: 491/1024 [MB] (11 MBps) [2024-12-13T23:10:36.778Z] Copying: 502/1024 [MB] (10 MBps) [2024-12-13T23:10:37.723Z] Copying: 513/1024 [MB] (11 MBps) [2024-12-13T23:10:38.668Z] Copying: 524/1024 [MB] (11 MBps) [2024-12-13T23:10:39.613Z] Copying: 536/1024 [MB] (11 MBps) [2024-12-13T23:10:41.001Z] Copying: 547/1024 [MB] (11 MBps) [2024-12-13T23:10:41.944Z] Copying: 561/1024 [MB] (14 MBps) [2024-12-13T23:10:42.887Z] Copying: 573/1024 [MB] (11 MBps) [2024-12-13T23:10:43.829Z] Copying: 584/1024 [MB] (11 MBps) [2024-12-13T23:10:44.775Z] Copying: 596/1024 [MB] (11 MBps) [2024-12-13T23:10:45.716Z] Copying: 607/1024 [MB] (11 MBps) [2024-12-13T23:10:46.660Z] Copying: 619/1024 [MB] (11 MBps) [2024-12-13T23:10:47.605Z] Copying: 631/1024 [MB] (11 MBps) [2024-12-13T23:10:49.037Z] Copying: 643/1024 [MB] (11 MBps) [2024-12-13T23:10:49.634Z] Copying: 654/1024 [MB] (11 MBps) [2024-12-13T23:10:51.020Z] Copying: 666/1024 [MB] (11 MBps) [2024-12-13T23:10:51.965Z] Copying: 677/1024 [MB] (11 MBps) [2024-12-13T23:10:52.911Z] Copying: 689/1024 [MB] (11 MBps) [2024-12-13T23:10:53.856Z] Copying: 701/1024 [MB] (11 MBps) [2024-12-13T23:10:54.801Z] Copying: 712/1024 [MB] (11 MBps) [2024-12-13T23:10:55.744Z] Copying: 724/1024 [MB] (12 MBps) [2024-12-13T23:10:56.688Z] Copying: 736/1024 [MB] (12 MBps) [2024-12-13T23:10:57.632Z] Copying: 748/1024 [MB] (11 MBps) [2024-12-13T23:10:59.020Z] Copying: 762/1024 [MB] (14 MBps) [2024-12-13T23:10:59.964Z] Copying: 772/1024 [MB] (10 MBps) [2024-12-13T23:11:00.907Z] Copying: 784/1024 [MB] (11 MBps) [2024-12-13T23:11:01.852Z] Copying: 795/1024 [MB] (11 MBps) [2024-12-13T23:11:02.796Z] Copying: 805/1024 [MB] (10 MBps) [2024-12-13T23:11:03.741Z] Copying: 816/1024 [MB] (10 MBps) [2024-12-13T23:11:04.684Z] Copying: 827/1024 [MB] (11 MBps) [2024-12-13T23:11:05.625Z] Copying: 842/1024 [MB] (14 MBps) [2024-12-13T23:11:07.020Z] Copying: 853/1024 [MB] (11 MBps) [2024-12-13T23:11:07.963Z] Copying: 865/1024 [MB] (11 MBps) [2024-12-13T23:11:08.908Z] Copying: 876/1024 [MB] (11 MBps) [2024-12-13T23:11:09.851Z] Copying: 888/1024 [MB] (11 MBps) [2024-12-13T23:11:10.797Z] Copying: 899/1024 [MB] (11 MBps) [2024-12-13T23:11:11.740Z] Copying: 910/1024 [MB] (10 MBps) [2024-12-13T23:11:12.683Z] Copying: 921/1024 [MB] (11 MBps) [2024-12-13T23:11:13.626Z] Copying: 933/1024 [MB] (11 MBps) [2024-12-13T23:11:15.028Z] Copying: 944/1024 [MB] (11 MBps) [2024-12-13T23:11:15.644Z] Copying: 956/1024 [MB] (11 MBps) [2024-12-13T23:11:17.030Z] Copying: 967/1024 [MB] (11 MBps) [2024-12-13T23:11:17.602Z] Copying: 979/1024 [MB] (11 MBps) [2024-12-13T23:11:18.989Z] Copying: 991/1024 [MB] (11 MBps) [2024-12-13T23:11:19.933Z] Copying: 1003/1024 [MB] (11 MBps) [2024-12-13T23:11:20.506Z] Copying: 1014/1024 [MB] (11 MBps) [2024-12-13T23:11:20.768Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-13 23:11:20.604431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.628 [2024-12-13 23:11:20.604500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:41.628 [2024-12-13 23:11:20.604515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:41.629 [2024-12-13 23:11:20.604523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.604545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:41.629 [2024-12-13 23:11:20.607412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 
23:11:20.607452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:41.629 [2024-12-13 23:11:20.607463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:30:41.629 [2024-12-13 23:11:20.607482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.607707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.607719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:41.629 [2024-12-13 23:11:20.607728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:30:41.629 [2024-12-13 23:11:20.607737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.612035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.612062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:41.629 [2024-12-13 23:11:20.612073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.285 ms 00:30:41.629 [2024-12-13 23:11:20.612085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.618507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.618539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:41.629 [2024-12-13 23:11:20.618549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.405 ms 00:30:41.629 [2024-12-13 23:11:20.618557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.639366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.639393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:41.629 [2024-12-13 23:11:20.639403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.757 ms 00:30:41.629 [2024-12-13 23:11:20.639409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.651137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.651165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:41.629 [2024-12-13 23:11:20.651174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.700 ms 00:30:41.629 [2024-12-13 23:11:20.651182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.654997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.655023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:41.629 [2024-12-13 23:11:20.655031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.781 ms 00:30:41.629 [2024-12-13 23:11:20.655037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.673485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.673510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:41.629 [2024-12-13 23:11:20.673518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.436 ms 00:30:41.629 [2024-12-13 23:11:20.673523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.691545] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.691570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:41.629 [2024-12-13 23:11:20.691578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.997 ms 00:30:41.629 [2024-12-13 23:11:20.691584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.709096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.709122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:41.629 [2024-12-13 23:11:20.709129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.487 ms 00:30:41.629 [2024-12-13 23:11:20.709135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.726794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.629 [2024-12-13 23:11:20.726825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:41.629 [2024-12-13 23:11:20.726833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.606 ms 00:30:41.629 [2024-12-13 23:11:20.726838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.629 [2024-12-13 23:11:20.726864] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:41.629 [2024-12-13 23:11:20.726881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:41.629 [2024-12-13 23:11:20.726892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:41.629 [2024-12-13 23:11:20.726898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: 
free 00:30:41.629 [2024-12-13 23:11:20.726975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.726999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 
261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:41.629 [2024-12-13 23:11:20.727182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727417] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:41.630 [2024-12-13 23:11:20.727497] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:41.630 [2024-12-13 23:11:20.727503] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80fd5bf1-b48b-459d-9945-757b4839e422 00:30:41.630 [2024-12-13 23:11:20.727510] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:41.630 [2024-12-13 23:11:20.727516] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:41.630 [2024-12-13 23:11:20.727522] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:41.630 [2024-12-13 23:11:20.727529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:41.630 [2024-12-13 23:11:20.727541] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:41.630 [2024-12-13 23:11:20.727549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:41.630 [2024-12-13 23:11:20.727555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:41.630 [2024-12-13 23:11:20.727560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:41.630 [2024-12-13 23:11:20.727565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:41.630 [2024-12-13 23:11:20.727570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.630 [2024-12-13 23:11:20.727576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:41.630 [2024-12-13 23:11:20.727583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:30:41.630 [2024-12-13 23:11:20.727592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.630 [2024-12-13 23:11:20.737892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.630 [2024-12-13 23:11:20.737916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:41.630 [2024-12-13 23:11:20.737924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 10.287 ms 00:30:41.630 [2024-12-13 23:11:20.737931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.630 [2024-12-13 23:11:20.738220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.630 [2024-12-13 23:11:20.738237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:41.630 [2024-12-13 23:11:20.738244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:30:41.630 [2024-12-13 23:11:20.738250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.891 [2024-12-13 23:11:20.765643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.891 [2024-12-13 23:11:20.765670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:41.891 [2024-12-13 23:11:20.765678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.891 [2024-12-13 23:11:20.765685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.891 [2024-12-13 23:11:20.765729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.891 [2024-12-13 23:11:20.765739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:41.891 [2024-12-13 23:11:20.765746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.891 [2024-12-13 23:11:20.765751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.891 [2024-12-13 23:11:20.765806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.891 [2024-12-13 23:11:20.765815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:41.891 [2024-12-13 23:11:20.765821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.891 [2024-12-13 23:11:20.765827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.891 [2024-12-13 23:11:20.765839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.891 [2024-12-13 23:11:20.765845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:41.891 [2024-12-13 23:11:20.765854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.891 [2024-12-13 23:11:20.765860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.891 [2024-12-13 23:11:20.829105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.891 [2024-12-13 23:11:20.829145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:41.891 [2024-12-13 23:11:20.829154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.891 [2024-12-13 23:11:20.829161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.891 [2024-12-13 23:11:20.880857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.891 [2024-12-13 23:11:20.880895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:41.892 [2024-12-13 23:11:20.880904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.892 [2024-12-13 23:11:20.880910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.892 [2024-12-13 23:11:20.880971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.892 [2024-12-13 23:11:20.880979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:41.892 
[2024-12-13 23:11:20.880986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.892 [2024-12-13 23:11:20.880992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.892 [2024-12-13 23:11:20.881022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.892 [2024-12-13 23:11:20.881030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:41.892 [2024-12-13 23:11:20.881037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.892 [2024-12-13 23:11:20.881045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.892 [2024-12-13 23:11:20.881119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.892 [2024-12-13 23:11:20.881128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:41.892 [2024-12-13 23:11:20.881134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.892 [2024-12-13 23:11:20.881141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.892 [2024-12-13 23:11:20.881165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.892 [2024-12-13 23:11:20.881173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:41.892 [2024-12-13 23:11:20.881180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.892 [2024-12-13 23:11:20.881186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.892 [2024-12-13 23:11:20.881221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.892 [2024-12-13 23:11:20.881228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:41.892 [2024-12-13 23:11:20.881235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.892 [2024-12-13 23:11:20.881241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.892 [2024-12-13 23:11:20.881277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.892 [2024-12-13 23:11:20.881286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:41.892 [2024-12-13 23:11:20.881292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.892 [2024-12-13 23:11:20.881300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.892 [2024-12-13 23:11:20.881406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.962 ms, result 0 00:30:42.463 00:30:42.463 00:30:42.463 23:11:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:45.010 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82221 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82221 ']' 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 82221 00:30:45.010 Process with pid 82221 is not found 00:30:45.010 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82221) - No such process 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 82221 is not found' 00:30:45.010 23:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:45.010 Remove shared memory files 00:30:45.010 23:11:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:45.010 23:11:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:45.010 23:11:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:45.010 23:11:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:45.010 23:11:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:45.010 23:11:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:45.010 23:11:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:45.010 ************************************ 00:30:45.010 END TEST ftl_dirty_shutdown 00:30:45.010 ************************************ 00:30:45.010 00:30:45.010 real 4m59.604s 00:30:45.010 user 5m12.064s 00:30:45.010 sys 0m24.097s 00:30:45.010 23:11:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.010 23:11:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:45.272 23:11:24 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:45.272 23:11:24 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:45.272 23:11:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.272 23:11:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:45.272 ************************************ 00:30:45.272 START TEST ftl_upgrade_shutdown 00:30:45.272 ************************************ 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:45.272 * Looking for test storage... 
00:30:45.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:45.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.272 --rc genhtml_branch_coverage=1 00:30:45.272 --rc genhtml_function_coverage=1 00:30:45.272 --rc genhtml_legend=1 00:30:45.272 --rc geninfo_all_blocks=1 00:30:45.272 --rc geninfo_unexecuted_blocks=1 00:30:45.272 00:30:45.272 ' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:45.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.272 --rc genhtml_branch_coverage=1 00:30:45.272 --rc genhtml_function_coverage=1 00:30:45.272 --rc genhtml_legend=1 00:30:45.272 --rc geninfo_all_blocks=1 00:30:45.272 --rc geninfo_unexecuted_blocks=1 00:30:45.272 00:30:45.272 ' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:45.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.272 --rc genhtml_branch_coverage=1 00:30:45.272 --rc genhtml_function_coverage=1 00:30:45.272 --rc genhtml_legend=1 00:30:45.272 --rc geninfo_all_blocks=1 00:30:45.272 --rc geninfo_unexecuted_blocks=1 00:30:45.272 00:30:45.272 ' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:45.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.272 --rc genhtml_branch_coverage=1 00:30:45.272 --rc genhtml_function_coverage=1 00:30:45.272 --rc genhtml_legend=1 00:30:45.272 --rc geninfo_all_blocks=1 00:30:45.272 --rc geninfo_unexecuted_blocks=1 00:30:45.272 00:30:45.272 ' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:45.272 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:45.272 23:11:24 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85420 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85420 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85420 ']' 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:45.273 23:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:45.534 [2024-12-13 23:11:24.449023] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
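For reference, the target bring-up traced above reduces to the following sketch. The readiness poll is a simplification of what the harness's waitforlisten helper does, and the paths are the ones used in this workspace:

# Start the SPDK target pinned to core 0, as in ftl/common.sh@87 above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
spdk_tgt_pid=$!

# Crude readiness check (illustrative only): wait for the default RPC socket,
# then issue a harmless RPC; the real waitforlisten helper retries properly.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs > /dev/null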
00:30:45.534 [2024-12-13 23:11:24.449168] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85420 ] 00:30:45.534 [2024-12-13 23:11:24.613270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.795 [2024-12-13 23:11:24.725366] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:46.372 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:46.633 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:46.633 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:46.633 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:46.633 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:46.633 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:46.633 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:46.633 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:46.633 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:46.894 { 00:30:46.894 "name": "basen1", 00:30:46.894 "aliases": [ 00:30:46.894 "df3d116f-0e2e-49e6-975a-488a5d7c55bf" 00:30:46.894 ], 00:30:46.894 "product_name": "NVMe disk", 00:30:46.894 "block_size": 4096, 00:30:46.894 "num_blocks": 1310720, 00:30:46.894 "uuid": "df3d116f-0e2e-49e6-975a-488a5d7c55bf", 00:30:46.894 "numa_id": -1, 00:30:46.894 "assigned_rate_limits": { 00:30:46.894 "rw_ios_per_sec": 0, 00:30:46.894 "rw_mbytes_per_sec": 0, 00:30:46.894 "r_mbytes_per_sec": 0, 00:30:46.894 "w_mbytes_per_sec": 0 00:30:46.894 }, 00:30:46.894 "claimed": true, 00:30:46.894 "claim_type": "read_many_write_one", 00:30:46.894 "zoned": false, 00:30:46.894 "supported_io_types": { 00:30:46.894 "read": true, 00:30:46.894 "write": true, 00:30:46.894 "unmap": true, 00:30:46.894 "flush": true, 00:30:46.894 "reset": true, 00:30:46.894 "nvme_admin": true, 00:30:46.894 "nvme_io": true, 00:30:46.894 "nvme_io_md": false, 00:30:46.894 "write_zeroes": true, 00:30:46.894 "zcopy": false, 00:30:46.894 "get_zone_info": false, 00:30:46.894 "zone_management": false, 00:30:46.894 "zone_append": false, 00:30:46.894 "compare": true, 00:30:46.894 "compare_and_write": false, 00:30:46.894 "abort": true, 00:30:46.894 "seek_hole": false, 00:30:46.894 "seek_data": false, 00:30:46.894 "copy": true, 00:30:46.894 "nvme_iov_md": false 00:30:46.894 }, 00:30:46.894 "driver_specific": { 00:30:46.894 "nvme": [ 00:30:46.894 { 00:30:46.894 "pci_address": "0000:00:11.0", 00:30:46.894 "trid": { 00:30:46.894 "trtype": "PCIe", 00:30:46.894 "traddr": "0000:00:11.0" 00:30:46.894 }, 00:30:46.894 "ctrlr_data": { 00:30:46.894 "cntlid": 0, 00:30:46.894 "vendor_id": "0x1b36", 00:30:46.894 "model_number": "QEMU NVMe Ctrl", 00:30:46.894 "serial_number": "12341", 00:30:46.894 "firmware_revision": "8.0.0", 00:30:46.894 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:46.894 "oacs": { 00:30:46.894 "security": 0, 00:30:46.894 "format": 1, 00:30:46.894 "firmware": 0, 00:30:46.894 "ns_manage": 1 00:30:46.894 }, 00:30:46.894 "multi_ctrlr": false, 00:30:46.894 "ana_reporting": false 00:30:46.894 }, 00:30:46.894 "vs": { 00:30:46.894 "nvme_version": "1.4" 00:30:46.894 }, 00:30:46.894 "ns_data": { 00:30:46.894 "id": 1, 00:30:46.894 "can_share": false 00:30:46.894 } 00:30:46.894 } 00:30:46.894 ], 00:30:46.894 "mp_policy": "active_passive" 00:30:46.894 } 00:30:46.894 } 00:30:46.894 ]' 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:46.894 23:11:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:47.156 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a02ee0f1-ea3b-4d15-b126-74af5d1ab49b 00:30:47.156 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:47.156 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a02ee0f1-ea3b-4d15-b126-74af5d1ab49b 00:30:47.417 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:47.417 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=45c7ffc1-28d5-4d73-9efd-9a170f29db95 00:30:47.417 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 45c7ffc1-28d5-4d73-9efd-9a170f29db95 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=23999515-2f44-4730-ab49-89ff24bf2a6a 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 23999515-2f44-4730-ab49-89ff24bf2a6a ]] 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 23999515-2f44-4730-ab49-89ff24bf2a6a 5120 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=23999515-2f44-4730-ab49-89ff24bf2a6a 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 23999515-2f44-4730-ab49-89ff24bf2a6a 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=23999515-2f44-4730-ab49-89ff24bf2a6a 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:47.678 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 23999515-2f44-4730-ab49-89ff24bf2a6a 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:47.939 { 00:30:47.939 "name": "23999515-2f44-4730-ab49-89ff24bf2a6a", 00:30:47.939 "aliases": [ 00:30:47.939 "lvs/basen1p0" 00:30:47.939 ], 00:30:47.939 "product_name": "Logical Volume", 00:30:47.939 "block_size": 4096, 00:30:47.939 "num_blocks": 5242880, 00:30:47.939 "uuid": "23999515-2f44-4730-ab49-89ff24bf2a6a", 00:30:47.939 "assigned_rate_limits": { 00:30:47.939 "rw_ios_per_sec": 0, 00:30:47.939 "rw_mbytes_per_sec": 0, 00:30:47.939 "r_mbytes_per_sec": 0, 00:30:47.939 "w_mbytes_per_sec": 0 00:30:47.939 }, 00:30:47.939 "claimed": false, 00:30:47.939 "zoned": false, 00:30:47.939 "supported_io_types": { 00:30:47.939 "read": true, 00:30:47.939 "write": true, 00:30:47.939 "unmap": true, 00:30:47.939 "flush": false, 00:30:47.939 "reset": true, 00:30:47.939 "nvme_admin": false, 00:30:47.939 "nvme_io": false, 00:30:47.939 "nvme_io_md": false, 00:30:47.939 "write_zeroes": 
true, 00:30:47.939 "zcopy": false, 00:30:47.939 "get_zone_info": false, 00:30:47.939 "zone_management": false, 00:30:47.939 "zone_append": false, 00:30:47.939 "compare": false, 00:30:47.939 "compare_and_write": false, 00:30:47.939 "abort": false, 00:30:47.939 "seek_hole": true, 00:30:47.939 "seek_data": true, 00:30:47.939 "copy": false, 00:30:47.939 "nvme_iov_md": false 00:30:47.939 }, 00:30:47.939 "driver_specific": { 00:30:47.939 "lvol": { 00:30:47.939 "lvol_store_uuid": "45c7ffc1-28d5-4d73-9efd-9a170f29db95", 00:30:47.939 "base_bdev": "basen1", 00:30:47.939 "thin_provision": true, 00:30:47.939 "num_allocated_clusters": 0, 00:30:47.939 "snapshot": false, 00:30:47.939 "clone": false, 00:30:47.939 "esnap_clone": false 00:30:47.939 } 00:30:47.939 } 00:30:47.939 } 00:30:47.939 ]' 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:47.939 23:11:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:48.201 23:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:48.201 23:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:48.201 23:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:48.462 23:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:48.462 23:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:48.462 23:11:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 23999515-2f44-4730-ab49-89ff24bf2a6a -c cachen1p0 --l2p_dram_limit 2 00:30:48.725 [2024-12-13 23:11:27.637112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.637156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:48.725 [2024-12-13 23:11:27.637169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:48.725 [2024-12-13 23:11:27.637177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.637221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.637229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:48.725 [2024-12-13 23:11:27.637237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:48.725 [2024-12-13 23:11:27.637243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.637260] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:48.725 [2024-12-13 
23:11:27.637786] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:48.725 [2024-12-13 23:11:27.637809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.637815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:48.725 [2024-12-13 23:11:27.637825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.550 ms 00:30:48.725 [2024-12-13 23:11:27.637831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.637878] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 03ed4ae9-9145-4587-ac47-c5a16662e063 00:30:48.725 [2024-12-13 23:11:27.639134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.639158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:48.725 [2024-12-13 23:11:27.639166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:48.725 [2024-12-13 23:11:27.639175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.645982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.646014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:48.725 [2024-12-13 23:11:27.646021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.774 ms 00:30:48.725 [2024-12-13 23:11:27.646029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.646060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.646069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:48.725 [2024-12-13 23:11:27.646075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:48.725 [2024-12-13 23:11:27.646085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.646126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.646135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:48.725 [2024-12-13 23:11:27.646142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:48.725 [2024-12-13 23:11:27.646155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.646171] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:48.725 [2024-12-13 23:11:27.649426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.649452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:48.725 [2024-12-13 23:11:27.649462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.256 ms 00:30:48.725 [2024-12-13 23:11:27.649468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.649493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.649500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:48.725 [2024-12-13 23:11:27.649508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:48.725 [2024-12-13 23:11:27.649515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.649528] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:48.725 [2024-12-13 23:11:27.649639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:48.725 [2024-12-13 23:11:27.649652] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:48.725 [2024-12-13 23:11:27.649661] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:48.725 [2024-12-13 23:11:27.649670] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:48.725 [2024-12-13 23:11:27.649677] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:48.725 [2024-12-13 23:11:27.649685] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:48.725 [2024-12-13 23:11:27.649690] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:48.725 [2024-12-13 23:11:27.649701] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:48.725 [2024-12-13 23:11:27.649707] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:48.725 [2024-12-13 23:11:27.649714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.649720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:48.725 [2024-12-13 23:11:27.649728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:30:48.725 [2024-12-13 23:11:27.649733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.649813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.725 [2024-12-13 23:11:27.649826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:48.725 [2024-12-13 23:11:27.649833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:30:48.725 [2024-12-13 23:11:27.649839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.725 [2024-12-13 23:11:27.649916] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:48.725 [2024-12-13 23:11:27.649923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:48.725 [2024-12-13 23:11:27.649931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:48.725 [2024-12-13 23:11:27.649937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.725 [2024-12-13 23:11:27.649944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:48.725 [2024-12-13 23:11:27.649949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:48.725 [2024-12-13 23:11:27.649956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:48.725 [2024-12-13 23:11:27.649962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:48.725 [2024-12-13 23:11:27.649969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:48.725 [2024-12-13 23:11:27.649974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.725 [2024-12-13 23:11:27.649981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:48.725 [2024-12-13 23:11:27.649986] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:48.725 [2024-12-13 23:11:27.649993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.725 [2024-12-13 23:11:27.649999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:48.725 [2024-12-13 23:11:27.650005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:48.725 [2024-12-13 23:11:27.650012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.725 [2024-12-13 23:11:27.650021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:48.725 [2024-12-13 23:11:27.650026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:48.725 [2024-12-13 23:11:27.650033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.725 [2024-12-13 23:11:27.650039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:48.725 [2024-12-13 23:11:27.650045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:48.725 [2024-12-13 23:11:27.650051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:48.725 [2024-12-13 23:11:27.650057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:48.725 [2024-12-13 23:11:27.650062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:48.725 [2024-12-13 23:11:27.650069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:48.725 [2024-12-13 23:11:27.650074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:48.726 [2024-12-13 23:11:27.650080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:48.726 [2024-12-13 23:11:27.650085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:48.726 [2024-12-13 23:11:27.650091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:48.726 [2024-12-13 23:11:27.650096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:48.726 [2024-12-13 23:11:27.650102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:48.726 [2024-12-13 23:11:27.650107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:48.726 [2024-12-13 23:11:27.650115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:48.726 [2024-12-13 23:11:27.650120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.726 [2024-12-13 23:11:27.650126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:48.726 [2024-12-13 23:11:27.650131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:48.726 [2024-12-13 23:11:27.650137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.726 [2024-12-13 23:11:27.650142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:48.726 [2024-12-13 23:11:27.650150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:48.726 [2024-12-13 23:11:27.650155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.726 [2024-12-13 23:11:27.650163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:48.726 [2024-12-13 23:11:27.650167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:48.726 [2024-12-13 23:11:27.650174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.726 [2024-12-13 23:11:27.650178] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:48.726 [2024-12-13 23:11:27.650186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:48.726 [2024-12-13 23:11:27.650191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:48.726 [2024-12-13 23:11:27.650198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.726 [2024-12-13 23:11:27.650207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:48.726 [2024-12-13 23:11:27.650216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:48.726 [2024-12-13 23:11:27.650223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:48.726 [2024-12-13 23:11:27.650230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:48.726 [2024-12-13 23:11:27.650236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:48.726 [2024-12-13 23:11:27.650242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:48.726 [2024-12-13 23:11:27.650249] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:48.726 [2024-12-13 23:11:27.650258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:48.726 [2024-12-13 23:11:27.650273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:48.726 [2024-12-13 23:11:27.650291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:48.726 [2024-12-13 23:11:27.650298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:48.726 [2024-12-13 23:11:27.650303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:48.726 [2024-12-13 23:11:27.650310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:48.726 [2024-12-13 23:11:27.650355] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:48.726 [2024-12-13 23:11:27.650363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:48.726 [2024-12-13 23:11:27.650376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:48.726 [2024-12-13 23:11:27.650382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:48.726 [2024-12-13 23:11:27.650388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:48.726 [2024-12-13 23:11:27.650394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.726 [2024-12-13 23:11:27.650401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:48.726 [2024-12-13 23:11:27.650407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:30:48.726 [2024-12-13 23:11:27.650414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.726 [2024-12-13 23:11:27.650455] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
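The ftl bdev whose startup is traced here was assembled by the RPC sequence captured earlier in this run; condensed into a sketch (the lvstore and lvol UUIDs are the ones reported above and differ on every run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: attach the NVMe controller at 0000:00:11.0, which exposes basen1.
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0

# Carve a 20480 MiB thin-provisioned lvol (lvs/basen1p0) out of a fresh lvstore on basen1.
$rpc bdev_lvol_create_lvstore basen1 lvs
$rpc bdev_lvol_create basen1p0 20480 -t -u 45c7ffc1-28d5-4d73-9efd-9a170f29db95

# Write-buffer cache: attach 0000:00:10.0 and split off a 5120 MiB partition, cachen1p0.
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
$rpc bdev_split_create cachen1 -s 5120 1

# Bind the base lvol and the cache partition into the ftl bdev, with a 2 MiB L2P DRAM limit.
$rpc -t 60 bdev_ftl_create -b ftl -d 23999515-2f44-4730-ab49-89ff24bf2a6a -c cachen1p0 --l2p_dram_limit 2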
00:30:48.726 [2024-12-13 23:11:27.650468] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:52.026 [2024-12-13 23:11:31.102717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.026 [2024-12-13 23:11:31.102868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:52.026 [2024-12-13 23:11:31.102891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3452.245 ms 00:30:52.026 [2024-12-13 23:11:31.102904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.026 [2024-12-13 23:11:31.140275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.026 [2024-12-13 23:11:31.140358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:52.026 [2024-12-13 23:11:31.140376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.100 ms 00:30:52.026 [2024-12-13 23:11:31.140388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.026 [2024-12-13 23:11:31.140482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.026 [2024-12-13 23:11:31.140497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:52.026 [2024-12-13 23:11:31.140509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:52.026 [2024-12-13 23:11:31.140529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.180860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.180932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:52.287 [2024-12-13 23:11:31.180946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.277 ms 00:30:52.287 [2024-12-13 23:11:31.180957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.180997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.181013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:52.287 [2024-12-13 23:11:31.181023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:52.287 [2024-12-13 23:11:31.181035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.181752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.181820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:52.287 [2024-12-13 23:11:31.181842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.660 ms 00:30:52.287 [2024-12-13 23:11:31.181855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.181907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.181920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:52.287 [2024-12-13 23:11:31.181934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:52.287 [2024-12-13 23:11:31.181949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.202749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.202823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:52.287 [2024-12-13 23:11:31.202836] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.778 ms 00:30:52.287 [2024-12-13 23:11:31.202846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.229102] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:52.287 [2024-12-13 23:11:31.230786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.230823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:52.287 [2024-12-13 23:11:31.230839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.842 ms 00:30:52.287 [2024-12-13 23:11:31.230848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.264436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.264495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:52.287 [2024-12-13 23:11:31.264513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.538 ms 00:30:52.287 [2024-12-13 23:11:31.264521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.264638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.264655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:52.287 [2024-12-13 23:11:31.264672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:30:52.287 [2024-12-13 23:11:31.264681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.289817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.289876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:52.287 [2024-12-13 23:11:31.289893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.079 ms 00:30:52.287 [2024-12-13 23:11:31.289903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.315097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.315150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:52.287 [2024-12-13 23:11:31.315167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.134 ms 00:30:52.287 [2024-12-13 23:11:31.315175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.315813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.315837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:52.287 [2024-12-13 23:11:31.315852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.587 ms 00:30:52.287 [2024-12-13 23:11:31.315865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.287 [2024-12-13 23:11:31.408903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.287 [2024-12-13 23:11:31.408961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:52.287 [2024-12-13 23:11:31.408984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 92.988 ms 00:30:52.287 [2024-12-13 23:11:31.408993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.549 [2024-12-13 23:11:31.437534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:52.549 [2024-12-13 23:11:31.437589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:52.549 [2024-12-13 23:11:31.437606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.451 ms 00:30:52.549 [2024-12-13 23:11:31.437615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.549 [2024-12-13 23:11:31.463640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.549 [2024-12-13 23:11:31.463695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:52.549 [2024-12-13 23:11:31.463710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.967 ms 00:30:52.549 [2024-12-13 23:11:31.463718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.549 [2024-12-13 23:11:31.490185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.549 [2024-12-13 23:11:31.490237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:52.549 [2024-12-13 23:11:31.490253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.393 ms 00:30:52.549 [2024-12-13 23:11:31.490261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.549 [2024-12-13 23:11:31.490322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.549 [2024-12-13 23:11:31.490332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:52.549 [2024-12-13 23:11:31.490349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:52.549 [2024-12-13 23:11:31.490356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.549 [2024-12-13 23:11:31.490474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:52.549 [2024-12-13 23:11:31.490491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:52.549 [2024-12-13 23:11:31.490503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:30:52.549 [2024-12-13 23:11:31.490511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:52.549 [2024-12-13 23:11:31.492448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3854.717 ms, result 0 00:30:52.549 { 00:30:52.549 "name": "ftl", 00:30:52.549 "uuid": "03ed4ae9-9145-4587-ac47-c5a16662e063" 00:30:52.549 } 00:30:52.549 23:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:52.812 [2024-12-13 23:11:31.702815] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.812 23:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:52.812 23:11:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:53.077 [2024-12-13 23:11:32.115169] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:53.077 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:53.338 [2024-12-13 23:11:32.331676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:53.338 23:11:32 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:53.600 Fill FTL, iteration 1 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85545 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85545 /var/tmp/spdk.tgt.sock 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85545 ']' 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.600 23:11:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:53.861 [2024-12-13 23:11:32.775409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
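Exporting the ftl bdev over NVMe/TCP, as traced above, comes down to four RPCs against the target plus a config save; a sketch using the same subsystem NQN and listener address as this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Create the TCP transport and a single-namespace subsystem backed by the ftl bdev.
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl

# Listen on loopback port 4420 and persist the target configuration.
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$rpc save_config   # the harness captures this, presumably into the tgt.json path set by ftl/common.sh@16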
00:30:53.861 [2024-12-13 23:11:32.775574] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85545 ] 00:30:53.861 [2024-12-13 23:11:32.932068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.123 [2024-12-13 23:11:33.051284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.695 23:11:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.695 23:11:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:54.695 23:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:54.956 ftln1 00:30:54.956 23:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:54.956 23:11:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85545 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85545 ']' 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85545 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85545 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:55.217 killing process with pid 85545 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85545' 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85545 00:30:55.217 23:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85545 00:30:56.598 23:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:56.598 23:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:56.598 [2024-12-13 23:11:35.580006] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
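On the initiator side, the helper target started above is only used to discover the remote namespace and to dump a bdev config that spdk_dd can replay on its own; condensed from the trace (ini.json is the config path set by ftl/common.sh@22, and the wrapping of save_subsystem_config output is how the harness builds that file):

ini_rpc=/var/tmp/spdk.tgt.sock
ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach the exported ftl namespace over TCP; it surfaces locally as ftln1.
$rpc -s $ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

# Wrap the bdev subsystem config in a JSON document for spdk_dd --json.
{
  echo '{"subsystems": ['
  $rpc -s $ini_rpc save_subsystem_config -n bdev
  echo ']}'
} > $ini_cnfg

# The helper target can then be killed; spdk_dd re-creates the attachment from ini.json.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=$ini_rpc \
  --json=$ini_cnfg --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0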
00:30:56.598 [2024-12-13 23:11:35.580094] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85589 ] 00:30:56.598 [2024-12-13 23:11:35.729787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.856 [2024-12-13 23:11:35.804793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.230  [2024-12-13T23:11:38.305Z] Copying: 261/1024 [MB] (261 MBps) [2024-12-13T23:11:39.239Z] Copying: 523/1024 [MB] (262 MBps) [2024-12-13T23:11:40.171Z] Copying: 767/1024 [MB] (244 MBps) [2024-12-13T23:11:40.171Z] Copying: 1019/1024 [MB] (252 MBps) [2024-12-13T23:11:40.780Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:31:01.640 00:31:01.640 23:11:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:01.640 23:11:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:01.640 Calculate MD5 checksum, iteration 1 00:31:01.640 23:11:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:01.640 23:11:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:01.640 23:11:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:01.640 23:11:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:01.640 23:11:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:01.640 23:11:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:01.640 [2024-12-13 23:11:40.775248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:31:01.640 [2024-12-13 23:11:40.775471] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85648 ] 00:31:01.928 [2024-12-13 23:11:40.923440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.928 [2024-12-13 23:11:40.998237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.303  [2024-12-13T23:11:43.009Z] Copying: 648/1024 [MB] (648 MBps) [2024-12-13T23:11:43.578Z] Copying: 1024/1024 [MB] (average 633 MBps) 00:31:04.438 00:31:04.438 23:11:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:04.438 23:11:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:06.349 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:06.349 Fill FTL, iteration 2 00:31:06.349 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8c666fbeed0269f101acbb5e70599503 00:31:06.349 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:06.349 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:06.349 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:06.349 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:06.350 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:06.350 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:06.350 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:06.350 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:06.350 23:11:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:06.607 [2024-12-13 23:11:45.532147] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
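Each pass of the loop traced above follows the same fill-then-checksum pattern; a sketch of what upgrade_shutdown.sh@38-48 does per iteration, where tcp_dd is the spdk_dd wrapper from ftl/common.sh and file is the scratch path in this test directory:

file=/home/vagrant/spdk_repo/spdk/test/ftl/file
seek=0; skip=0
for (( i = 0; i < 2; i++ )); do
  # Fill: stream 1024 MiB of random data into ftln1 at the current offset.
  tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
  (( seek += 1024 ))

  # Read the same range back and record its MD5 so the data can be verified later.
  tcp_dd --ib=ftln1 --of=$file --bs=1048576 --count=1024 --qd=2 --skip=$skip
  (( skip += 1024 ))
  sums[i]=$(md5sum $file | cut -f1 -d' ')
done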
00:31:06.607 [2024-12-13 23:11:45.532234] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85700 ] 00:31:06.607 [2024-12-13 23:11:45.681520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.866 [2024-12-13 23:11:45.757804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.240  [2024-12-13T23:11:48.316Z] Copying: 254/1024 [MB] (254 MBps) [2024-12-13T23:11:49.251Z] Copying: 515/1024 [MB] (261 MBps) [2024-12-13T23:11:50.185Z] Copying: 784/1024 [MB] (269 MBps) [2024-12-13T23:11:50.752Z] Copying: 1024/1024 [MB] (average 261 MBps) 00:31:11.612 00:31:11.612 Calculate MD5 checksum, iteration 2 00:31:11.612 23:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:11.612 23:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:11.612 23:11:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:11.612 23:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:11.612 23:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:11.612 23:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:11.612 23:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:11.612 23:11:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:11.612 [2024-12-13 23:11:50.613240] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:31:11.612 [2024-12-13 23:11:50.613358] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85753 ] 00:31:11.870 [2024-12-13 23:11:50.769260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.870 [2024-12-13 23:11:50.851656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.244  [2024-12-13T23:11:52.952Z] Copying: 618/1024 [MB] (618 MBps) [2024-12-13T23:11:53.892Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:31:14.752 00:31:14.752 23:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:14.752 23:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:17.296 23:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:17.296 23:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b2b5bc5a420d0ec864f379e6b06fab78 00:31:17.296 23:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:17.296 23:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:17.296 23:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:17.296 [2024-12-13 23:11:56.057691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.296 [2024-12-13 23:11:56.057742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:17.296 [2024-12-13 23:11:56.057763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:17.296 [2024-12-13 23:11:56.057771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.296 [2024-12-13 23:11:56.057790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.296 [2024-12-13 23:11:56.057801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:17.296 [2024-12-13 23:11:56.057807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:17.296 [2024-12-13 23:11:56.057814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.296 [2024-12-13 23:11:56.057838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.296 [2024-12-13 23:11:56.057846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:17.296 [2024-12-13 23:11:56.057853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:17.296 [2024-12-13 23:11:56.057859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.296 [2024-12-13 23:11:56.057917] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.212 ms, result 0 00:31:17.296 true 00:31:17.296 23:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:17.296 { 00:31:17.296 "name": "ftl", 00:31:17.296 "properties": [ 00:31:17.296 { 00:31:17.296 "name": "superblock_version", 00:31:17.296 "value": 5, 00:31:17.296 "read-only": true 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "name": "base_device", 00:31:17.296 "bands": [ 00:31:17.296 { 00:31:17.296 "id": 0, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 
00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 1, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 2, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 3, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 4, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 5, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 6, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 7, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 8, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 9, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 10, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 11, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 12, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 13, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 14, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 15, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 16, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 17, 00:31:17.296 "state": "FREE", 00:31:17.296 "validity": 0.0 00:31:17.296 } 00:31:17.296 ], 00:31:17.296 "read-only": true 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "name": "cache_device", 00:31:17.296 "type": "bdev", 00:31:17.296 "chunks": [ 00:31:17.296 { 00:31:17.296 "id": 0, 00:31:17.296 "state": "INACTIVE", 00:31:17.296 "utilization": 0.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 1, 00:31:17.296 "state": "CLOSED", 00:31:17.296 "utilization": 1.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 2, 00:31:17.296 "state": "CLOSED", 00:31:17.296 "utilization": 1.0 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 3, 00:31:17.296 "state": "OPEN", 00:31:17.296 "utilization": 0.001953125 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "id": 4, 00:31:17.296 "state": "OPEN", 00:31:17.296 "utilization": 0.0 00:31:17.296 } 00:31:17.296 ], 00:31:17.296 "read-only": true 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "name": "verbose_mode", 00:31:17.296 "value": true, 00:31:17.296 "unit": "", 00:31:17.296 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:17.296 }, 00:31:17.296 { 00:31:17.296 "name": "prep_upgrade_on_shutdown", 00:31:17.296 "value": false, 00:31:17.296 "unit": "", 00:31:17.296 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:17.296 } 00:31:17.296 ] 00:31:17.296 } 00:31:17.296 23:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:17.557 [2024-12-13 23:11:56.469984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:17.557 [2024-12-13 23:11:56.470020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:17.557 [2024-12-13 23:11:56.470029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:17.557 [2024-12-13 23:11:56.470034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.557 [2024-12-13 23:11:56.470051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.557 [2024-12-13 23:11:56.470057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:17.557 [2024-12-13 23:11:56.470063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:17.557 [2024-12-13 23:11:56.470069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.557 [2024-12-13 23:11:56.470084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.557 [2024-12-13 23:11:56.470090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:17.557 [2024-12-13 23:11:56.470096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:17.557 [2024-12-13 23:11:56.470102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.557 [2024-12-13 23:11:56.470143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.153 ms, result 0 00:31:17.557 true 00:31:17.557 23:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:17.557 23:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:17.557 23:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:17.819 23:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:17.819 23:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:17.819 23:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:17.819 [2024-12-13 23:11:56.886301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.819 [2024-12-13 23:11:56.886328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:17.819 [2024-12-13 23:11:56.886336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:17.819 [2024-12-13 23:11:56.886341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.819 [2024-12-13 23:11:56.886356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.819 [2024-12-13 23:11:56.886362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:17.819 [2024-12-13 23:11:56.886367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:17.819 [2024-12-13 23:11:56.886372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.819 [2024-12-13 23:11:56.886387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.819 [2024-12-13 23:11:56.886393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:17.819 [2024-12-13 23:11:56.886398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:17.819 [2024-12-13 23:11:56.886403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:17.819 [2024-12-13 23:11:56.886441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.128 ms, result 0 00:31:17.819 true 00:31:17.819 23:11:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:18.080 { 00:31:18.080 "name": "ftl", 00:31:18.080 "properties": [ 00:31:18.080 { 00:31:18.080 "name": "superblock_version", 00:31:18.080 "value": 5, 00:31:18.080 "read-only": true 00:31:18.080 }, 00:31:18.080 { 00:31:18.080 "name": "base_device", 00:31:18.080 "bands": [ 00:31:18.080 { 00:31:18.080 "id": 0, 00:31:18.080 "state": "FREE", 00:31:18.080 "validity": 0.0 00:31:18.080 }, 00:31:18.080 { 00:31:18.080 "id": 1, 00:31:18.080 "state": "FREE", 00:31:18.080 "validity": 0.0 00:31:18.080 }, 00:31:18.080 { 00:31:18.080 "id": 2, 00:31:18.080 "state": "FREE", 00:31:18.080 "validity": 0.0 00:31:18.080 }, 00:31:18.080 { 00:31:18.080 "id": 3, 00:31:18.080 "state": "FREE", 00:31:18.080 "validity": 0.0 00:31:18.080 }, 00:31:18.080 { 00:31:18.080 "id": 4, 00:31:18.080 "state": "FREE", 00:31:18.080 "validity": 0.0 00:31:18.080 }, 00:31:18.080 { 00:31:18.080 "id": 5, 00:31:18.080 "state": "FREE", 00:31:18.080 "validity": 0.0 00:31:18.080 }, 00:31:18.080 { 00:31:18.080 "id": 6, 00:31:18.080 "state": "FREE", 00:31:18.080 "validity": 0.0 00:31:18.080 }, 00:31:18.080 { 00:31:18.080 "id": 7, 00:31:18.080 "state": "FREE", 00:31:18.080 "validity": 0.0 00:31:18.080 }, 00:31:18.080 { 00:31:18.080 "id": 8, 00:31:18.080 "state": "FREE", 00:31:18.080 "validity": 0.0 00:31:18.080 }, 00:31:18.080 { 00:31:18.081 "id": 9, 00:31:18.081 "state": "FREE", 00:31:18.081 "validity": 0.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 10, 00:31:18.081 "state": "FREE", 00:31:18.081 "validity": 0.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 11, 00:31:18.081 "state": "FREE", 00:31:18.081 "validity": 0.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 12, 00:31:18.081 "state": "FREE", 00:31:18.081 "validity": 0.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 13, 00:31:18.081 "state": "FREE", 00:31:18.081 "validity": 0.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 14, 00:31:18.081 "state": "FREE", 00:31:18.081 "validity": 0.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 15, 00:31:18.081 "state": "FREE", 00:31:18.081 "validity": 0.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 16, 00:31:18.081 "state": "FREE", 00:31:18.081 "validity": 0.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 17, 00:31:18.081 "state": "FREE", 00:31:18.081 "validity": 0.0 00:31:18.081 } 00:31:18.081 ], 00:31:18.081 "read-only": true 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "name": "cache_device", 00:31:18.081 "type": "bdev", 00:31:18.081 "chunks": [ 00:31:18.081 { 00:31:18.081 "id": 0, 00:31:18.081 "state": "INACTIVE", 00:31:18.081 "utilization": 0.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 1, 00:31:18.081 "state": "CLOSED", 00:31:18.081 "utilization": 1.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 2, 00:31:18.081 "state": "CLOSED", 00:31:18.081 "utilization": 1.0 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 3, 00:31:18.081 "state": "OPEN", 00:31:18.081 "utilization": 0.001953125 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "id": 4, 00:31:18.081 "state": "OPEN", 00:31:18.081 "utilization": 0.0 00:31:18.081 } 00:31:18.081 ], 00:31:18.081 "read-only": true 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "name": "verbose_mode", 
00:31:18.081 "value": true, 00:31:18.081 "unit": "", 00:31:18.081 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:18.081 }, 00:31:18.081 { 00:31:18.081 "name": "prep_upgrade_on_shutdown", 00:31:18.081 "value": true, 00:31:18.081 "unit": "", 00:31:18.081 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:18.081 } 00:31:18.081 ] 00:31:18.081 } 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85420 ]] 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85420 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85420 ']' 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85420 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85420 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:18.081 killing process with pid 85420 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85420' 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85420 00:31:18.081 23:11:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85420 00:31:18.653 [2024-12-13 23:11:57.692646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:18.653 [2024-12-13 23:11:57.703082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.653 [2024-12-13 23:11:57.703115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:18.653 [2024-12-13 23:11:57.703126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:18.653 [2024-12-13 23:11:57.703133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:18.653 [2024-12-13 23:11:57.703151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:18.653 [2024-12-13 23:11:57.705377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:18.653 [2024-12-13 23:11:57.705403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:18.653 [2024-12-13 23:11:57.705411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.215 ms 00:31:18.653 [2024-12-13 23:11:57.705421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.794 [2024-12-13 23:12:05.679226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.794 [2024-12-13 23:12:05.679285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:26.795 [2024-12-13 23:12:05.679299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7973.756 ms 00:31:26.795 [2024-12-13 23:12:05.679311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.680741] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.680776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:26.795 [2024-12-13 23:12:05.680785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.417 ms 00:31:26.795 [2024-12-13 23:12:05.680791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.681643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.681656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:26.795 [2024-12-13 23:12:05.681664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:31:26.795 [2024-12-13 23:12:05.681670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.690160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.690190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:26.795 [2024-12-13 23:12:05.690198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.460 ms 00:31:26.795 [2024-12-13 23:12:05.690206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.696051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.696078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:26.795 [2024-12-13 23:12:05.696088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.818 ms 00:31:26.795 [2024-12-13 23:12:05.696095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.696158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.696167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:26.795 [2024-12-13 23:12:05.696179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:26.795 [2024-12-13 23:12:05.696186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.703964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.703990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:26.795 [2024-12-13 23:12:05.703998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.766 ms 00:31:26.795 [2024-12-13 23:12:05.704004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.711960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.711985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:26.795 [2024-12-13 23:12:05.711993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.931 ms 00:31:26.795 [2024-12-13 23:12:05.711999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.719772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.719795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:26.795 [2024-12-13 23:12:05.719802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.748 ms 00:31:26.795 [2024-12-13 23:12:05.719809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.727230] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.727255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:26.795 [2024-12-13 23:12:05.727262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.371 ms 00:31:26.795 [2024-12-13 23:12:05.727268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.727291] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:26.795 [2024-12-13 23:12:05.727312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:26.795 [2024-12-13 23:12:05.727321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:26.795 [2024-12-13 23:12:05.727328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:26.795 [2024-12-13 23:12:05.727334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:26.795 [2024-12-13 23:12:05.727431] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:26.795 [2024-12-13 23:12:05.727438] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 03ed4ae9-9145-4587-ac47-c5a16662e063 00:31:26.795 [2024-12-13 23:12:05.727444] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:26.795 [2024-12-13 23:12:05.727450] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:26.795 [2024-12-13 23:12:05.727456] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:26.795 [2024-12-13 23:12:05.727471] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:26.795 [2024-12-13 23:12:05.727477] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:26.795 [2024-12-13 23:12:05.727485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:26.795 [2024-12-13 23:12:05.727491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:26.795 [2024-12-13 23:12:05.727497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:26.795 [2024-12-13 23:12:05.727502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:26.795 [2024-12-13 23:12:05.727508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.727518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:26.795 [2024-12-13 23:12:05.727524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.218 ms 00:31:26.795 [2024-12-13 23:12:05.727531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.737526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.737551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:26.795 [2024-12-13 23:12:05.737559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.974 ms 00:31:26.795 [2024-12-13 23:12:05.737570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.737873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:26.795 [2024-12-13 23:12:05.737886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:26.795 [2024-12-13 23:12:05.737894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:31:26.795 [2024-12-13 23:12:05.737900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.772709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.795 [2024-12-13 23:12:05.772734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:26.795 [2024-12-13 23:12:05.772745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.795 [2024-12-13 23:12:05.772752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.772784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.795 [2024-12-13 23:12:05.772791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:26.795 [2024-12-13 23:12:05.772798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.795 [2024-12-13 23:12:05.772804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.772867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.795 [2024-12-13 23:12:05.772876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:26.795 [2024-12-13 23:12:05.772883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.795 [2024-12-13 23:12:05.772889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.772904] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.795 [2024-12-13 23:12:05.772910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:26.795 [2024-12-13 23:12:05.772916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.795 [2024-12-13 23:12:05.772922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.835889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.795 [2024-12-13 23:12:05.835918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:26.795 [2024-12-13 23:12:05.835927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.795 [2024-12-13 23:12:05.835937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.795 [2024-12-13 23:12:05.886903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.795 [2024-12-13 23:12:05.886933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:26.795 [2024-12-13 23:12:05.886943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.795 [2024-12-13 23:12:05.886950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.796 [2024-12-13 23:12:05.887019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.796 [2024-12-13 23:12:05.887026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:26.796 [2024-12-13 23:12:05.887034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.796 [2024-12-13 23:12:05.887040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.796 [2024-12-13 23:12:05.887090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.796 [2024-12-13 23:12:05.887099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:26.796 [2024-12-13 23:12:05.887106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.796 [2024-12-13 23:12:05.887112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.796 [2024-12-13 23:12:05.887189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.796 [2024-12-13 23:12:05.887197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:26.796 [2024-12-13 23:12:05.887204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.796 [2024-12-13 23:12:05.887210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.796 [2024-12-13 23:12:05.887235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.796 [2024-12-13 23:12:05.887245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:26.796 [2024-12-13 23:12:05.887253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.796 [2024-12-13 23:12:05.887260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.796 [2024-12-13 23:12:05.887297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.796 [2024-12-13 23:12:05.887304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:26.796 [2024-12-13 23:12:05.887311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.796 [2024-12-13 23:12:05.887317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.796 
[2024-12-13 23:12:05.887359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:26.796 [2024-12-13 23:12:05.887368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:26.796 [2024-12-13 23:12:05.887375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:26.796 [2024-12-13 23:12:05.887381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:26.796 [2024-12-13 23:12:05.887501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8184.355 ms, result 0 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85961 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85961 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85961 ']' 00:31:33.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:33.393 23:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:33.393 [2024-12-13 23:12:11.661565] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:31:33.393 [2024-12-13 23:12:11.661725] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85961 ] 00:31:33.393 [2024-12-13 23:12:11.829695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.393 [2024-12-13 23:12:11.950164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.656 [2024-12-13 23:12:12.744266] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:33.656 [2024-12-13 23:12:12.744346] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:33.919 [2024-12-13 23:12:12.897889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.897940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:33.919 [2024-12-13 23:12:12.897954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:33.919 [2024-12-13 23:12:12.897963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.898029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.898041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:33.919 [2024-12-13 23:12:12.898050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:31:33.919 [2024-12-13 23:12:12.898058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.898086] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:33.919 [2024-12-13 23:12:12.898867] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:33.919 [2024-12-13 23:12:12.898896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.898906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:33.919 [2024-12-13 23:12:12.898916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.820 ms 00:31:33.919 [2024-12-13 23:12:12.898924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.900667] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:33.919 [2024-12-13 23:12:12.914801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.914844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:33.919 [2024-12-13 23:12:12.914970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.137 ms 00:31:33.919 [2024-12-13 23:12:12.914978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.915057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.915068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:33.919 [2024-12-13 23:12:12.915078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:31:33.919 [2024-12-13 23:12:12.915086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.923282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 
23:12:12.923319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:33.919 [2024-12-13 23:12:12.923329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.109 ms 00:31:33.919 [2024-12-13 23:12:12.923337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.923405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.923415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:33.919 [2024-12-13 23:12:12.923424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:33.919 [2024-12-13 23:12:12.923432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.923490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.923504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:33.919 [2024-12-13 23:12:12.923513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:33.919 [2024-12-13 23:12:12.923520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.923546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:33.919 [2024-12-13 23:12:12.927603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.927636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:33.919 [2024-12-13 23:12:12.927646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.062 ms 00:31:33.919 [2024-12-13 23:12:12.927657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.927690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.927699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:33.919 [2024-12-13 23:12:12.927708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:33.919 [2024-12-13 23:12:12.927716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.927794] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:33.919 [2024-12-13 23:12:12.927820] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:33.919 [2024-12-13 23:12:12.927861] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:33.919 [2024-12-13 23:12:12.927876] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:33.919 [2024-12-13 23:12:12.927983] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:33.919 [2024-12-13 23:12:12.927994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:33.919 [2024-12-13 23:12:12.928006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:33.919 [2024-12-13 23:12:12.928017] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:33.919 [2024-12-13 23:12:12.928027] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:33.919 [2024-12-13 23:12:12.928038] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:33.919 [2024-12-13 23:12:12.928046] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:33.919 [2024-12-13 23:12:12.928054] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:33.919 [2024-12-13 23:12:12.928062] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:33.919 [2024-12-13 23:12:12.928070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.928078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:33.919 [2024-12-13 23:12:12.928086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:31:33.919 [2024-12-13 23:12:12.928093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.928182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.919 [2024-12-13 23:12:12.928191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:33.919 [2024-12-13 23:12:12.928201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:31:33.919 [2024-12-13 23:12:12.928209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.919 [2024-12-13 23:12:12.928310] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:33.919 [2024-12-13 23:12:12.928320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:33.919 [2024-12-13 23:12:12.928329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:33.919 [2024-12-13 23:12:12.928337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.919 [2024-12-13 23:12:12.928344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:33.919 [2024-12-13 23:12:12.928351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:33.919 [2024-12-13 23:12:12.928358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:33.919 [2024-12-13 23:12:12.928365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:33.919 [2024-12-13 23:12:12.928373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:33.919 [2024-12-13 23:12:12.928380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.919 [2024-12-13 23:12:12.928387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:33.919 [2024-12-13 23:12:12.928394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:33.919 [2024-12-13 23:12:12.928401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.919 [2024-12-13 23:12:12.928409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:33.919 [2024-12-13 23:12:12.928417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:33.919 [2024-12-13 23:12:12.928424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.919 [2024-12-13 23:12:12.928431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:33.919 [2024-12-13 23:12:12.928438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:33.919 [2024-12-13 23:12:12.928445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.919 [2024-12-13 23:12:12.928453] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:33.919 [2024-12-13 23:12:12.928460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:33.919 [2024-12-13 23:12:12.928466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:33.919 [2024-12-13 23:12:12.928473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:33.919 [2024-12-13 23:12:12.928488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:33.919 [2024-12-13 23:12:12.928494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:33.919 [2024-12-13 23:12:12.928501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:33.920 [2024-12-13 23:12:12.928508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:33.920 [2024-12-13 23:12:12.928514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:33.920 [2024-12-13 23:12:12.928521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:33.920 [2024-12-13 23:12:12.928527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:33.920 [2024-12-13 23:12:12.928534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:33.920 [2024-12-13 23:12:12.928541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:33.920 [2024-12-13 23:12:12.928548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:33.920 [2024-12-13 23:12:12.928554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.920 [2024-12-13 23:12:12.928560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:33.920 [2024-12-13 23:12:12.928567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:33.920 [2024-12-13 23:12:12.928573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.920 [2024-12-13 23:12:12.928580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:33.920 [2024-12-13 23:12:12.928587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:33.920 [2024-12-13 23:12:12.928593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.920 [2024-12-13 23:12:12.928600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:33.920 [2024-12-13 23:12:12.928607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:33.920 [2024-12-13 23:12:12.928613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.920 [2024-12-13 23:12:12.928620] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:33.920 [2024-12-13 23:12:12.928628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:33.920 [2024-12-13 23:12:12.928637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:33.920 [2024-12-13 23:12:12.928645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:33.920 [2024-12-13 23:12:12.928656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:33.920 [2024-12-13 23:12:12.928663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:33.920 [2024-12-13 23:12:12.928670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:33.920 [2024-12-13 23:12:12.928677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:33.920 [2024-12-13 23:12:12.928684] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:33.920 [2024-12-13 23:12:12.928691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:33.920 [2024-12-13 23:12:12.928700] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:33.920 [2024-12-13 23:12:12.928710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:33.920 [2024-12-13 23:12:12.928727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:33.920 [2024-12-13 23:12:12.928749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:33.920 [2024-12-13 23:12:12.928772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:33.920 [2024-12-13 23:12:12.928780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:33.920 [2024-12-13 23:12:12.928790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:33.920 [2024-12-13 23:12:12.928844] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:33.920 [2024-12-13 23:12:12.928852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:33.920 [2024-12-13 23:12:12.928869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:33.920 [2024-12-13 23:12:12.928877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:33.920 [2024-12-13 23:12:12.928886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:33.920 [2024-12-13 23:12:12.928894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.920 [2024-12-13 23:12:12.928902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:33.920 [2024-12-13 23:12:12.928911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.653 ms 00:31:33.920 [2024-12-13 23:12:12.928918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.920 [2024-12-13 23:12:12.928963] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:33.920 [2024-12-13 23:12:12.928974] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:38.192 [2024-12-13 23:12:17.025144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.025190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:38.193 [2024-12-13 23:12:17.025205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4096.166 ms 00:31:38.193 [2024-12-13 23:12:17.025214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.050374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.050409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:38.193 [2024-12-13 23:12:17.050421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.961 ms 00:31:38.193 [2024-12-13 23:12:17.050429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.050500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.050514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:38.193 [2024-12-13 23:12:17.050523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:38.193 [2024-12-13 23:12:17.050530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.080659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.080694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:38.193 [2024-12-13 23:12:17.080705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.093 ms 00:31:38.193 [2024-12-13 23:12:17.080715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.080742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.080750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:38.193 [2024-12-13 23:12:17.080771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:38.193 [2024-12-13 23:12:17.080779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.081134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.081149] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:38.193 [2024-12-13 23:12:17.081158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:31:38.193 [2024-12-13 23:12:17.081165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.081206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.081220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:38.193 [2024-12-13 23:12:17.081228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:38.193 [2024-12-13 23:12:17.081236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.095180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.095208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:38.193 [2024-12-13 23:12:17.095218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.922 ms 00:31:38.193 [2024-12-13 23:12:17.095225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.115119] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:38.193 [2024-12-13 23:12:17.115156] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:38.193 [2024-12-13 23:12:17.115169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.115177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:38.193 [2024-12-13 23:12:17.115186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.838 ms 00:31:38.193 [2024-12-13 23:12:17.115193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.128864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.128896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:38.193 [2024-12-13 23:12:17.128907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.632 ms 00:31:38.193 [2024-12-13 23:12:17.128915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.140369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.140396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:38.193 [2024-12-13 23:12:17.140405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.413 ms 00:31:38.193 [2024-12-13 23:12:17.140412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.151961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.151987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:38.193 [2024-12-13 23:12:17.151997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.516 ms 00:31:38.193 [2024-12-13 23:12:17.152004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.152603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.152621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:38.193 [2024-12-13 
23:12:17.152630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms 00:31:38.193 [2024-12-13 23:12:17.152638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.206776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.206823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:38.193 [2024-12-13 23:12:17.206835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.119 ms 00:31:38.193 [2024-12-13 23:12:17.206843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.217193] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:38.193 [2024-12-13 23:12:17.218018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.218039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:38.193 [2024-12-13 23:12:17.218049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.130 ms 00:31:38.193 [2024-12-13 23:12:17.218056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.218139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.218152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:38.193 [2024-12-13 23:12:17.218161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:38.193 [2024-12-13 23:12:17.218168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.218219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.218229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:38.193 [2024-12-13 23:12:17.218237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:38.193 [2024-12-13 23:12:17.218244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.218263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.218271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:38.193 [2024-12-13 23:12:17.218282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:38.193 [2024-12-13 23:12:17.218289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.218319] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:38.193 [2024-12-13 23:12:17.218329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.218336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:38.193 [2024-12-13 23:12:17.218344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:38.193 [2024-12-13 23:12:17.218351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.240978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.241077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:38.193 [2024-12-13 23:12:17.241135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.609 ms 00:31:38.193 [2024-12-13 23:12:17.241181] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.241277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.193 [2024-12-13 23:12:17.241323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:38.193 [2024-12-13 23:12:17.241366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:38.193 [2024-12-13 23:12:17.241406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.193 [2024-12-13 23:12:17.242807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4344.489 ms, result 0 00:31:38.193 [2024-12-13 23:12:17.257514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.193 [2024-12-13 23:12:17.273499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:38.193 [2024-12-13 23:12:17.281621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:38.765 23:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.765 23:12:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:38.765 23:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:38.765 23:12:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:38.765 23:12:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:38.765 [2024-12-13 23:12:17.789978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.765 [2024-12-13 23:12:17.790018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:38.765 [2024-12-13 23:12:17.790034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:38.765 [2024-12-13 23:12:17.790042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.765 [2024-12-13 23:12:17.790064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.765 [2024-12-13 23:12:17.790072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:38.765 [2024-12-13 23:12:17.790080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:38.765 [2024-12-13 23:12:17.790088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.765 [2024-12-13 23:12:17.790108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:38.765 [2024-12-13 23:12:17.790117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:38.765 [2024-12-13 23:12:17.790125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:38.765 [2024-12-13 23:12:17.790133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:38.765 [2024-12-13 23:12:17.790189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.202 ms, result 0 00:31:38.765 true 00:31:38.765 23:12:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:39.026 { 00:31:39.026 "name": "ftl", 00:31:39.026 "properties": [ 00:31:39.026 { 00:31:39.026 "name": "superblock_version", 00:31:39.026 "value": 5, 00:31:39.026 "read-only": true 00:31:39.026 }, 
00:31:39.026 { 00:31:39.026 "name": "base_device", 00:31:39.026 "bands": [ 00:31:39.026 { 00:31:39.026 "id": 0, 00:31:39.026 "state": "CLOSED", 00:31:39.026 "validity": 1.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 1, 00:31:39.026 "state": "CLOSED", 00:31:39.026 "validity": 1.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 2, 00:31:39.026 "state": "CLOSED", 00:31:39.026 "validity": 0.007843137254901933 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 3, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 4, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 5, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 6, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 7, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 8, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 9, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 10, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 11, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 12, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 13, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 14, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 15, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 16, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 17, 00:31:39.026 "state": "FREE", 00:31:39.026 "validity": 0.0 00:31:39.026 } 00:31:39.026 ], 00:31:39.026 "read-only": true 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "name": "cache_device", 00:31:39.026 "type": "bdev", 00:31:39.026 "chunks": [ 00:31:39.026 { 00:31:39.026 "id": 0, 00:31:39.026 "state": "INACTIVE", 00:31:39.026 "utilization": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 1, 00:31:39.026 "state": "OPEN", 00:31:39.026 "utilization": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 2, 00:31:39.026 "state": "OPEN", 00:31:39.026 "utilization": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 3, 00:31:39.026 "state": "FREE", 00:31:39.026 "utilization": 0.0 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "id": 4, 00:31:39.026 "state": "FREE", 00:31:39.026 "utilization": 0.0 00:31:39.026 } 00:31:39.026 ], 00:31:39.026 "read-only": true 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "name": "verbose_mode", 00:31:39.026 "value": true, 00:31:39.026 "unit": "", 00:31:39.026 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:39.026 }, 00:31:39.026 { 00:31:39.026 "name": "prep_upgrade_on_shutdown", 00:31:39.026 "value": false, 00:31:39.026 "unit": "", 00:31:39.026 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:39.026 } 00:31:39.026 ] 00:31:39.026 } 00:31:39.026 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:39.026 23:12:18 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:39.026 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:39.287 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:39.287 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:39.287 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:39.287 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:39.287 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:39.287 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:39.287 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:39.287 Validate MD5 checksum, iteration 1 00:31:39.287 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:39.288 23:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:39.548 [2024-12-13 23:12:18.474981] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:31:39.548 [2024-12-13 23:12:18.475070] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86054 ] 00:31:39.548 [2024-12-13 23:12:18.635374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.810 [2024-12-13 23:12:18.749345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.199  [2024-12-13T23:12:21.725Z] Copying: 504/1024 [MB] (504 MBps) [2024-12-13T23:12:22.668Z] Copying: 1024/1024 [MB] (average 517 MBps) 00:31:43.528 00:31:43.528 23:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:43.528 23:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:45.438 Validate MD5 checksum, iteration 2 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8c666fbeed0269f101acbb5e70599503 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8c666fbeed0269f101acbb5e70599503 != \8\c\6\6\6\f\b\e\e\d\0\2\6\9\f\1\0\1\a\c\b\b\5\e\7\0\5\9\9\5\0\3 ]] 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:45.438 23:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:45.697 [2024-12-13 23:12:24.581075] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:31:45.697 [2024-12-13 23:12:24.581178] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86123 ] 00:31:45.697 [2024-12-13 23:12:24.728804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.697 [2024-12-13 23:12:24.817657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.608  [2024-12-13T23:12:27.012Z] Copying: 667/1024 [MB] (667 MBps) [2024-12-13T23:12:31.209Z] Copying: 1024/1024 [MB] (average 632 MBps) 00:31:52.069 00:31:52.069 23:12:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:52.069 23:12:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b2b5bc5a420d0ec864f379e6b06fab78 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b2b5bc5a420d0ec864f379e6b06fab78 != \b\2\b\5\b\c\5\a\4\2\0\d\0\e\c\8\6\4\f\3\7\9\e\6\b\0\6\f\a\b\7\8 ]] 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85961 ]] 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85961 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:53.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86211 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86211 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 86211 ']' 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:53.971 23:12:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:53.971 [2024-12-13 23:12:33.022832] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:53.971 [2024-12-13 23:12:33.022945] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86211 ] 00:31:54.232 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85961 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:54.232 [2024-12-13 23:12:33.182713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.232 [2024-12-13 23:12:33.297648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.176 [2024-12-13 23:12:34.190634] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:55.176 [2024-12-13 23:12:34.190738] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:55.439 [2024-12-13 23:12:34.346080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.346146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:55.439 [2024-12-13 23:12:34.346163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:55.439 [2024-12-13 23:12:34.346172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.346244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.346256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:55.439 [2024-12-13 23:12:34.346266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:31:55.439 [2024-12-13 23:12:34.346275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.346304] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:55.439 [2024-12-13 23:12:34.347120] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:55.439 [2024-12-13 23:12:34.347150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.347160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:55.439 [2024-12-13 23:12:34.347170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.857 ms 00:31:55.439 [2024-12-13 23:12:34.347179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.347990] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:55.439 [2024-12-13 23:12:34.368268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.368327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:55.439 [2024-12-13 23:12:34.368343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.282 ms 00:31:55.439 [2024-12-13 23:12:34.368352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.378308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:55.439 [2024-12-13 23:12:34.378362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:55.439 [2024-12-13 23:12:34.378375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:31:55.439 [2024-12-13 23:12:34.378383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.378742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.378783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:55.439 [2024-12-13 23:12:34.378796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:31:55.439 [2024-12-13 23:12:34.378805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.378873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.378883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:55.439 [2024-12-13 23:12:34.378892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:31:55.439 [2024-12-13 23:12:34.378901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.378927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.378937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:55.439 [2024-12-13 23:12:34.378946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:55.439 [2024-12-13 23:12:34.378954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.378977] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:55.439 [2024-12-13 23:12:34.382359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.382401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:55.439 [2024-12-13 23:12:34.382413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.388 ms 00:31:55.439 [2024-12-13 23:12:34.382422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.382466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.382476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:55.439 [2024-12-13 23:12:34.382485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:55.439 [2024-12-13 23:12:34.382493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.382531] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:55.439 [2024-12-13 23:12:34.382559] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:55.439 [2024-12-13 23:12:34.382600] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:55.439 [2024-12-13 23:12:34.382621] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:55.439 [2024-12-13 23:12:34.382734] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:55.439 [2024-12-13 23:12:34.382746] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:55.439 [2024-12-13 23:12:34.382773] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:55.439 [2024-12-13 23:12:34.382785] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:55.439 [2024-12-13 23:12:34.382794] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:55.439 [2024-12-13 23:12:34.382803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:55.439 [2024-12-13 23:12:34.382812] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:55.439 [2024-12-13 23:12:34.382820] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:55.439 [2024-12-13 23:12:34.382828] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:55.439 [2024-12-13 23:12:34.382836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.382848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:55.439 [2024-12-13 23:12:34.382856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:31:55.439 [2024-12-13 23:12:34.382864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.382951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.439 [2024-12-13 23:12:34.382958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:55.439 [2024-12-13 23:12:34.382966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:31:55.439 [2024-12-13 23:12:34.382973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.439 [2024-12-13 23:12:34.383077] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:55.439 [2024-12-13 23:12:34.383096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:55.439 [2024-12-13 23:12:34.383110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:55.439 [2024-12-13 23:12:34.383119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:55.439 [2024-12-13 23:12:34.383129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:55.439 [2024-12-13 23:12:34.383136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:55.440 [2024-12-13 23:12:34.383150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:55.440 [2024-12-13 23:12:34.383157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:55.440 [2024-12-13 23:12:34.383163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:55.440 [2024-12-13 23:12:34.383183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:55.440 [2024-12-13 23:12:34.383190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:55.440 [2024-12-13 23:12:34.383205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:55.440 [2024-12-13 23:12:34.383211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:55.440 [2024-12-13 23:12:34.383228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:55.440 [2024-12-13 23:12:34.383235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:55.440 [2024-12-13 23:12:34.383250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:55.440 [2024-12-13 23:12:34.383266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:55.440 [2024-12-13 23:12:34.383273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:55.440 [2024-12-13 23:12:34.383280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:55.440 [2024-12-13 23:12:34.383286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:55.440 [2024-12-13 23:12:34.383293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:55.440 [2024-12-13 23:12:34.383300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:55.440 [2024-12-13 23:12:34.383307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:55.440 [2024-12-13 23:12:34.383314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:55.440 [2024-12-13 23:12:34.383322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:55.440 [2024-12-13 23:12:34.383329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:55.440 [2024-12-13 23:12:34.383337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:55.440 [2024-12-13 23:12:34.383343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:55.440 [2024-12-13 23:12:34.383350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:55.440 [2024-12-13 23:12:34.383364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:55.440 [2024-12-13 23:12:34.383370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:55.440 [2024-12-13 23:12:34.383385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:55.440 [2024-12-13 23:12:34.383409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:55.440 [2024-12-13 23:12:34.383419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383427] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:55.440 [2024-12-13 23:12:34.383436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:55.440 [2024-12-13 23:12:34.383445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:55.440 [2024-12-13 23:12:34.383479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:55.440 [2024-12-13 23:12:34.383489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:55.440 [2024-12-13 23:12:34.383497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:55.440 [2024-12-13 23:12:34.383505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:55.440 [2024-12-13 23:12:34.383513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:55.440 [2024-12-13 23:12:34.383520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:55.440 [2024-12-13 23:12:34.383528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:55.440 [2024-12-13 23:12:34.383537] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:55.440 [2024-12-13 23:12:34.383547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:55.440 [2024-12-13 23:12:34.383563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:55.440 [2024-12-13 23:12:34.383586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:55.440 [2024-12-13 23:12:34.383593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:55.440 [2024-12-13 23:12:34.383601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:55.440 [2024-12-13 23:12:34.383607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:55.440 [2024-12-13 23:12:34.383659] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:55.440 [2024-12-13 23:12:34.383667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:55.440 [2024-12-13 23:12:34.383687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:55.440 [2024-12-13 23:12:34.383695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:55.440 [2024-12-13 23:12:34.383705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:55.440 [2024-12-13 23:12:34.383717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.440 [2024-12-13 23:12:34.383726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:55.440 [2024-12-13 23:12:34.383734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.710 ms 00:31:55.440 [2024-12-13 23:12:34.383743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.440 [2024-12-13 23:12:34.417062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.440 [2024-12-13 23:12:34.417113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:55.440 [2024-12-13 23:12:34.417125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.248 ms 00:31:55.440 [2024-12-13 23:12:34.417135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.440 [2024-12-13 23:12:34.417184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.440 [2024-12-13 23:12:34.417195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:55.440 [2024-12-13 23:12:34.417204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:55.440 [2024-12-13 23:12:34.417212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.440 [2024-12-13 23:12:34.456899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.440 [2024-12-13 23:12:34.456948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:55.440 [2024-12-13 23:12:34.456960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.623 ms 00:31:55.440 [2024-12-13 23:12:34.456969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.440 [2024-12-13 23:12:34.457015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.440 [2024-12-13 23:12:34.457025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:55.440 [2024-12-13 23:12:34.457034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:55.440 [2024-12-13 23:12:34.457047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.440 [2024-12-13 23:12:34.457184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.440 [2024-12-13 23:12:34.457196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:55.440 [2024-12-13 23:12:34.457206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:31:55.440 [2024-12-13 23:12:34.457216] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:55.440 [2024-12-13 23:12:34.457269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.440 [2024-12-13 23:12:34.457279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:55.440 [2024-12-13 23:12:34.457288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:55.440 [2024-12-13 23:12:34.457298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.440 [2024-12-13 23:12:34.478001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.440 [2024-12-13 23:12:34.478050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:55.440 [2024-12-13 23:12:34.478062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.673 ms 00:31:55.440 [2024-12-13 23:12:34.478075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.440 [2024-12-13 23:12:34.478201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.440 [2024-12-13 23:12:34.478214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:55.440 [2024-12-13 23:12:34.478223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:55.441 [2024-12-13 23:12:34.478232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.441 [2024-12-13 23:12:34.511521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.441 [2024-12-13 23:12:34.511584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:55.441 [2024-12-13 23:12:34.511600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.266 ms 00:31:55.441 [2024-12-13 23:12:34.511611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.441 [2024-12-13 23:12:34.521924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.441 [2024-12-13 23:12:34.521973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:55.441 [2024-12-13 23:12:34.521995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:31:55.441 [2024-12-13 23:12:34.522004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.702 [2024-12-13 23:12:34.594220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.702 [2024-12-13 23:12:34.594289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:55.702 [2024-12-13 23:12:34.594303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.143 ms 00:31:55.702 [2024-12-13 23:12:34.594312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.702 [2024-12-13 23:12:34.594524] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:55.702 [2024-12-13 23:12:34.594689] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:55.702 [2024-12-13 23:12:34.594862] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:55.702 [2024-12-13 23:12:34.595026] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:55.702 [2024-12-13 23:12:34.595037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.702 [2024-12-13 23:12:34.595046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:55.702 [2024-12-13 
23:12:34.595057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.667 ms 00:31:55.702 [2024-12-13 23:12:34.595066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.702 [2024-12-13 23:12:34.595159] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:55.702 [2024-12-13 23:12:34.595174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.702 [2024-12-13 23:12:34.595186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:55.702 [2024-12-13 23:12:34.595196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:55.702 [2024-12-13 23:12:34.595204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.702 [2024-12-13 23:12:34.613214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.702 [2024-12-13 23:12:34.613276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:55.702 [2024-12-13 23:12:34.613289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.984 ms 00:31:55.702 [2024-12-13 23:12:34.613298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.702 [2024-12-13 23:12:34.622295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.702 [2024-12-13 23:12:34.622344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:55.702 [2024-12-13 23:12:34.622356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:55.702 [2024-12-13 23:12:34.622366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:55.702 [2024-12-13 23:12:34.622470] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:55.702 [2024-12-13 23:12:34.622747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:55.702 [2024-12-13 23:12:34.622798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:55.702 [2024-12-13 23:12:34.622810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:31:55.702 [2024-12-13 23:12:34.622819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.646 [2024-12-13 23:12:35.562554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.646 [2024-12-13 23:12:35.562632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:56.646 [2024-12-13 23:12:35.562649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 938.757 ms 00:31:56.646 [2024-12-13 23:12:35.562657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.646 [2024-12-13 23:12:35.567153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.646 [2024-12-13 23:12:35.567200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:56.646 [2024-12-13 23:12:35.567210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.783 ms 00:31:56.646 [2024-12-13 23:12:35.567219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.646 [2024-12-13 23:12:35.568285] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:56.646 [2024-12-13 23:12:35.568334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.646 [2024-12-13 23:12:35.568343] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:56.646 [2024-12-13 23:12:35.568353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.077 ms 00:31:56.646 [2024-12-13 23:12:35.568360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.646 [2024-12-13 23:12:35.568396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.646 [2024-12-13 23:12:35.568405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:56.646 [2024-12-13 23:12:35.568414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:56.646 [2024-12-13 23:12:35.568428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.646 [2024-12-13 23:12:35.568458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 945.990 ms, result 0 00:31:56.646 [2024-12-13 23:12:35.568495] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:56.646 [2024-12-13 23:12:35.568799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.646 [2024-12-13 23:12:35.568827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:56.646 [2024-12-13 23:12:35.568836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:31:56.646 [2024-12-13 23:12:35.568843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.383247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.383285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:57.592 [2024-12-13 23:12:36.383303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 813.373 ms 00:31:57.592 [2024-12-13 23:12:36.383309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.386645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.386672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:57.592 [2024-12-13 23:12:36.386679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.176 ms 00:31:57.592 [2024-12-13 23:12:36.386685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.387403] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:57.592 [2024-12-13 23:12:36.387431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.387437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:57.592 [2024-12-13 23:12:36.387444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.725 ms 00:31:57.592 [2024-12-13 23:12:36.387450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.387487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.387494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:57.592 [2024-12-13 23:12:36.387501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:57.592 [2024-12-13 23:12:36.387506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 
23:12:36.387533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 819.034 ms, result 0 00:31:57.592 [2024-12-13 23:12:36.387567] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:57.592 [2024-12-13 23:12:36.387576] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:57.592 [2024-12-13 23:12:36.387584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.387591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:57.592 [2024-12-13 23:12:36.387598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1765.138 ms 00:31:57.592 [2024-12-13 23:12:36.387604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.387627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.387636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:57.592 [2024-12-13 23:12:36.387643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:57.592 [2024-12-13 23:12:36.387649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.396778] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:57.592 [2024-12-13 23:12:36.396863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.396876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:57.592 [2024-12-13 23:12:36.396885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.201 ms 00:31:57.592 [2024-12-13 23:12:36.396891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.397435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.397454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:57.592 [2024-12-13 23:12:36.397464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.484 ms 00:31:57.592 [2024-12-13 23:12:36.397470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.399150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.399168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:57.592 [2024-12-13 23:12:36.399175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.667 ms 00:31:57.592 [2024-12-13 23:12:36.399182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.399211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.399219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:57.592 [2024-12-13 23:12:36.399225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:57.592 [2024-12-13 23:12:36.399233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.399311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.399319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:57.592 
[2024-12-13 23:12:36.399325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:57.592 [2024-12-13 23:12:36.399331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.399347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.399353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:57.592 [2024-12-13 23:12:36.399359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:57.592 [2024-12-13 23:12:36.399365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.399395] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:57.592 [2024-12-13 23:12:36.399403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.399410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:57.592 [2024-12-13 23:12:36.399415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:57.592 [2024-12-13 23:12:36.399421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.399476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.592 [2024-12-13 23:12:36.399483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:57.592 [2024-12-13 23:12:36.399489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:31:57.592 [2024-12-13 23:12:36.399496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.592 [2024-12-13 23:12:36.400412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2053.962 ms, result 0 00:31:57.592 [2024-12-13 23:12:36.413120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.593 [2024-12-13 23:12:36.429104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:57.593 [2024-12-13 23:12:36.437246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:57.593 Validate MD5 checksum, iteration 1 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:57.593 23:12:36 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:57.593 23:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:57.593 [2024-12-13 23:12:36.542525] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:57.593 [2024-12-13 23:12:36.542635] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86256 ] 00:31:57.593 [2024-12-13 23:12:36.702466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.854 [2024-12-13 23:12:36.813307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.242  [2024-12-13T23:12:39.321Z] Copying: 588/1024 [MB] (588 MBps) [2024-12-13T23:12:40.265Z] Copying: 1024/1024 [MB] (average 598 MBps) 00:32:01.125 00:32:01.125 23:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:01.125 23:12:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:03.672 Validate MD5 checksum, iteration 2 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8c666fbeed0269f101acbb5e70599503 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8c666fbeed0269f101acbb5e70599503 != \8\c\6\6\6\f\b\e\e\d\0\2\6\9\f\1\0\1\a\c\b\b\5\e\7\0\5\9\9\5\0\3 ]] 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:03.672 23:12:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:03.672 [2024-12-13 23:12:42.405873] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 24.03.0 initialization... 00:32:03.672 [2024-12-13 23:12:42.406727] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86317 ] 00:32:03.672 [2024-12-13 23:12:42.566829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.672 [2024-12-13 23:12:42.665506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.059  [2024-12-13T23:12:44.775Z] Copying: 694/1024 [MB] (694 MBps) [2024-12-13T23:12:45.718Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:32:06.578 00:32:06.578 23:12:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:06.578 23:12:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:08.490 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:08.490 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b2b5bc5a420d0ec864f379e6b06fab78 00:32:08.490 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b2b5bc5a420d0ec864f379e6b06fab78 != \b\2\b\5\b\c\5\a\4\2\0\d\0\e\c\8\6\4\f\3\7\9\e\6\b\0\6\f\a\b\7\8 ]] 00:32:08.490 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:08.490 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:08.490 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:08.490 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86211 ]] 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86211 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 86211 ']' 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 86211 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86211 00:32:08.491 killing process with pid 86211 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86211' 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 86211 00:32:08.491 23:12:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 86211 00:32:09.063 [2024-12-13 23:12:48.028643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:09.063 [2024-12-13 23:12:48.041099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.041133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:09.063 [2024-12-13 23:12:48.041145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:09.063 [2024-12-13 23:12:48.041152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.041172] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:09.063 [2024-12-13 23:12:48.043375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.043400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:09.063 [2024-12-13 23:12:48.043412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.192 ms 00:32:09.063 [2024-12-13 23:12:48.043419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.043642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.043652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:09.063 [2024-12-13 23:12:48.043660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:32:09.063 [2024-12-13 23:12:48.043666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.044827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.044851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:09.063 [2024-12-13 23:12:48.044859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.147 ms 00:32:09.063 [2024-12-13 23:12:48.044869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.045721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.045741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:09.063 [2024-12-13 23:12:48.045749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.828 ms 00:32:09.063 [2024-12-13 23:12:48.045766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.053790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.053816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:09.063 [2024-12-13 23:12:48.053825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.995 ms 00:32:09.063 [2024-12-13 23:12:48.053835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.058253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.058279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:09.063 [2024-12-13 23:12:48.058288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.391 ms 00:32:09.063 [2024-12-13 23:12:48.058295] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.058358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.058365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:09.063 [2024-12-13 23:12:48.058373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:32:09.063 [2024-12-13 23:12:48.058383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.066374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.066399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:09.063 [2024-12-13 23:12:48.066406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.978 ms 00:32:09.063 [2024-12-13 23:12:48.066412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.074226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.074251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:09.063 [2024-12-13 23:12:48.074259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.788 ms 00:32:09.063 [2024-12-13 23:12:48.074264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.081725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.081749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:09.063 [2024-12-13 23:12:48.081765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.435 ms 00:32:09.063 [2024-12-13 23:12:48.081771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.088954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.063 [2024-12-13 23:12:48.088978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:09.063 [2024-12-13 23:12:48.088986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.137 ms 00:32:09.063 [2024-12-13 23:12:48.088992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.063 [2024-12-13 23:12:48.089017] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:09.063 [2024-12-13 23:12:48.089029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:09.063 [2024-12-13 23:12:48.089037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:09.063 [2024-12-13 23:12:48.089043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:09.063 [2024-12-13 23:12:48.089050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:09.063 [2024-12-13 23:12:48.089056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:09.063 [2024-12-13 23:12:48.089062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:09.063 [2024-12-13 23:12:48.089068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:09.063 [2024-12-13 23:12:48.089074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 
[2024-12-13 23:12:48.089081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:09.064 [2024-12-13 23:12:48.089140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:09.064 [2024-12-13 23:12:48.089147] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 03ed4ae9-9145-4587-ac47-c5a16662e063 00:32:09.064 [2024-12-13 23:12:48.089153] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:09.064 [2024-12-13 23:12:48.089159] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:09.064 [2024-12-13 23:12:48.089165] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:09.064 [2024-12-13 23:12:48.089171] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:09.064 [2024-12-13 23:12:48.089177] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:09.064 [2024-12-13 23:12:48.089184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:09.064 [2024-12-13 23:12:48.089189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:09.064 [2024-12-13 23:12:48.089194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:09.064 [2024-12-13 23:12:48.089200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:09.064 [2024-12-13 23:12:48.089207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.064 [2024-12-13 23:12:48.089216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:09.064 [2024-12-13 23:12:48.089223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:32:09.064 [2024-12-13 23:12:48.089228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.064 [2024-12-13 23:12:48.099438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.064 [2024-12-13 23:12:48.099464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:09.064 [2024-12-13 23:12:48.099473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.196 ms 00:32:09.064 [2024-12-13 23:12:48.099480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:32:09.064 [2024-12-13 23:12:48.099795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:09.064 [2024-12-13 23:12:48.099805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:09.064 [2024-12-13 23:12:48.099812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.296 ms 00:32:09.064 [2024-12-13 23:12:48.099818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.064 [2024-12-13 23:12:48.134691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.064 [2024-12-13 23:12:48.134716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:09.064 [2024-12-13 23:12:48.134725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.064 [2024-12-13 23:12:48.134732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.064 [2024-12-13 23:12:48.134766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.064 [2024-12-13 23:12:48.134773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:09.064 [2024-12-13 23:12:48.134780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.064 [2024-12-13 23:12:48.134786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.064 [2024-12-13 23:12:48.134857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.064 [2024-12-13 23:12:48.134866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:09.064 [2024-12-13 23:12:48.134873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.064 [2024-12-13 23:12:48.134880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.064 [2024-12-13 23:12:48.134897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.064 [2024-12-13 23:12:48.134904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:09.064 [2024-12-13 23:12:48.134910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.064 [2024-12-13 23:12:48.134916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.064 [2024-12-13 23:12:48.196578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.064 [2024-12-13 23:12:48.196608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:09.064 [2024-12-13 23:12:48.196617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.064 [2024-12-13 23:12:48.196624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.325 [2024-12-13 23:12:48.248049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.325 [2024-12-13 23:12:48.248083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:09.325 [2024-12-13 23:12:48.248092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.325 [2024-12-13 23:12:48.248098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.325 [2024-12-13 23:12:48.248162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.325 [2024-12-13 23:12:48.248169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:09.325 [2024-12-13 23:12:48.248176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.325 [2024-12-13 23:12:48.248183] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.325 [2024-12-13 23:12:48.248232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.325 [2024-12-13 23:12:48.248251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:09.325 [2024-12-13 23:12:48.248259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.325 [2024-12-13 23:12:48.248265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.325 [2024-12-13 23:12:48.248342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.325 [2024-12-13 23:12:48.248350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:09.325 [2024-12-13 23:12:48.248356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.325 [2024-12-13 23:12:48.248362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.325 [2024-12-13 23:12:48.248388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.325 [2024-12-13 23:12:48.248396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:09.325 [2024-12-13 23:12:48.248404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.325 [2024-12-13 23:12:48.248410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.325 [2024-12-13 23:12:48.248444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.325 [2024-12-13 23:12:48.248452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:09.325 [2024-12-13 23:12:48.248458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.325 [2024-12-13 23:12:48.248464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.325 [2024-12-13 23:12:48.248502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:09.325 [2024-12-13 23:12:48.248513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:09.325 [2024-12-13 23:12:48.248520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:09.325 [2024-12-13 23:12:48.248525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:09.325 [2024-12-13 23:12:48.248631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 207.505 ms, result 0 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:10.267 Remove shared memory files 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:10.267 23:12:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85961 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:10.267 00:32:10.267 real 1m24.966s 00:32:10.267 user 1m54.401s 00:32:10.267 sys 0m20.076s 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:10.267 ************************************ 00:32:10.267 END TEST ftl_upgrade_shutdown 00:32:10.267 ************************************ 00:32:10.267 23:12:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:10.267 23:12:49 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:32:10.267 23:12:49 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:10.267 23:12:49 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:10.267 23:12:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:10.267 23:12:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:10.267 ************************************ 00:32:10.267 START TEST ftl_restore_fast 00:32:10.267 ************************************ 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:32:10.267 * Looking for test storage... 00:32:10.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:32:10.267 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:10.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.268 --rc genhtml_branch_coverage=1 00:32:10.268 --rc genhtml_function_coverage=1 00:32:10.268 --rc genhtml_legend=1 00:32:10.268 --rc geninfo_all_blocks=1 00:32:10.268 --rc geninfo_unexecuted_blocks=1 00:32:10.268 00:32:10.268 ' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:10.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.268 --rc genhtml_branch_coverage=1 00:32:10.268 --rc genhtml_function_coverage=1 00:32:10.268 --rc genhtml_legend=1 00:32:10.268 --rc geninfo_all_blocks=1 00:32:10.268 --rc geninfo_unexecuted_blocks=1 00:32:10.268 00:32:10.268 ' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:10.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.268 --rc genhtml_branch_coverage=1 00:32:10.268 --rc genhtml_function_coverage=1 00:32:10.268 --rc genhtml_legend=1 00:32:10.268 --rc geninfo_all_blocks=1 00:32:10.268 --rc geninfo_unexecuted_blocks=1 00:32:10.268 00:32:10.268 ' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:10.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.268 --rc genhtml_branch_coverage=1 00:32:10.268 --rc genhtml_function_coverage=1 00:32:10.268 --rc genhtml_legend=1 00:32:10.268 --rc geninfo_all_blocks=1 00:32:10.268 --rc geninfo_unexecuted_blocks=1 00:32:10.268 00:32:10.268 ' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.WvHPqvxCqy 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:32:10.268 23:12:49 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:32:10.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=86467 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 86467 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 86467 ']' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.268 23:12:49 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:10.529 [2024-12-13 23:12:49.484810] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:32:10.529 [2024-12-13 23:12:49.485280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86467 ] 00:32:10.529 [2024-12-13 23:12:49.646511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.790 [2024-12-13 23:12:49.748513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.360 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.360 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:32:11.360 23:12:50 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:11.360 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:32:11.360 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:11.360 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:32:11.360 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:32:11.360 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:11.622 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:11.622 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:32:11.622 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:11.622 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:32:11.622 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:11.622 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:11.622 23:12:50 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:32:11.622 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:11.883 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:11.883 { 00:32:11.883 "name": "nvme0n1", 00:32:11.883 "aliases": [ 00:32:11.883 "4891aff7-3faa-4359-8906-ada5a3733284" 00:32:11.883 ], 00:32:11.883 "product_name": "NVMe disk", 00:32:11.883 "block_size": 4096, 00:32:11.883 "num_blocks": 1310720, 00:32:11.883 "uuid": "4891aff7-3faa-4359-8906-ada5a3733284", 00:32:11.883 "numa_id": -1, 00:32:11.883 "assigned_rate_limits": { 00:32:11.883 "rw_ios_per_sec": 0, 00:32:11.883 "rw_mbytes_per_sec": 0, 00:32:11.883 "r_mbytes_per_sec": 0, 00:32:11.883 "w_mbytes_per_sec": 0 00:32:11.883 }, 00:32:11.883 "claimed": true, 00:32:11.883 "claim_type": "read_many_write_one", 00:32:11.883 "zoned": false, 00:32:11.883 "supported_io_types": { 00:32:11.883 "read": true, 00:32:11.883 "write": true, 00:32:11.883 "unmap": true, 00:32:11.883 "flush": true, 00:32:11.883 "reset": true, 00:32:11.883 "nvme_admin": true, 00:32:11.883 "nvme_io": true, 00:32:11.883 "nvme_io_md": false, 00:32:11.883 "write_zeroes": true, 00:32:11.883 "zcopy": false, 00:32:11.883 "get_zone_info": false, 00:32:11.883 "zone_management": false, 00:32:11.883 "zone_append": false, 00:32:11.883 "compare": true, 00:32:11.883 "compare_and_write": false, 00:32:11.883 "abort": true, 00:32:11.883 "seek_hole": false, 00:32:11.883 "seek_data": false, 00:32:11.883 "copy": true, 00:32:11.883 "nvme_iov_md": false 00:32:11.883 }, 00:32:11.883 "driver_specific": { 00:32:11.883 "nvme": [ 00:32:11.883 { 00:32:11.883 "pci_address": "0000:00:11.0", 00:32:11.883 "trid": { 00:32:11.883 "trtype": "PCIe", 00:32:11.883 "traddr": "0000:00:11.0" 00:32:11.883 }, 00:32:11.883 "ctrlr_data": { 00:32:11.883 "cntlid": 0, 00:32:11.883 "vendor_id": "0x1b36", 00:32:11.883 "model_number": "QEMU NVMe Ctrl", 00:32:11.883 "serial_number": "12341", 00:32:11.883 "firmware_revision": "8.0.0", 00:32:11.883 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:11.883 "oacs": { 00:32:11.883 "security": 0, 00:32:11.883 "format": 1, 00:32:11.883 "firmware": 0, 00:32:11.883 "ns_manage": 1 00:32:11.883 }, 00:32:11.883 "multi_ctrlr": false, 00:32:11.884 "ana_reporting": false 00:32:11.884 }, 00:32:11.884 "vs": { 00:32:11.884 "nvme_version": "1.4" 00:32:11.884 }, 00:32:11.884 "ns_data": { 00:32:11.884 "id": 1, 00:32:11.884 "can_share": false 00:32:11.884 } 00:32:11.884 } 00:32:11.884 ], 00:32:11.884 "mp_policy": "active_passive" 00:32:11.884 } 00:32:11.884 } 00:32:11.884 ]' 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:32:11.884 23:12:50 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:12.143 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=45c7ffc1-28d5-4d73-9efd-9a170f29db95 00:32:12.143 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:32:12.143 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 45c7ffc1-28d5-4d73-9efd-9a170f29db95 00:32:12.404 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:12.666 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=df3ada73-4ded-4d8d-9b20-4b7327e023b0 00:32:12.666 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u df3ada73-4ded-4d8d-9b20-4b7327e023b0 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:12.925 23:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:13.184 { 00:32:13.184 "name": "09e9415f-6f7b-426b-8fe8-1f97c88b37ec", 00:32:13.184 "aliases": [ 00:32:13.184 "lvs/nvme0n1p0" 00:32:13.184 ], 00:32:13.184 "product_name": "Logical Volume", 00:32:13.184 "block_size": 4096, 00:32:13.184 "num_blocks": 26476544, 00:32:13.184 "uuid": "09e9415f-6f7b-426b-8fe8-1f97c88b37ec", 00:32:13.184 "assigned_rate_limits": { 00:32:13.184 "rw_ios_per_sec": 0, 00:32:13.184 "rw_mbytes_per_sec": 0, 00:32:13.184 "r_mbytes_per_sec": 0, 00:32:13.184 "w_mbytes_per_sec": 0 00:32:13.184 }, 00:32:13.184 "claimed": false, 00:32:13.184 "zoned": false, 00:32:13.184 "supported_io_types": { 00:32:13.184 "read": true, 00:32:13.184 "write": true, 00:32:13.184 "unmap": true, 00:32:13.184 "flush": false, 00:32:13.184 "reset": true, 00:32:13.184 "nvme_admin": false, 00:32:13.184 "nvme_io": false, 00:32:13.184 "nvme_io_md": false, 00:32:13.184 "write_zeroes": true, 00:32:13.184 "zcopy": false, 00:32:13.184 "get_zone_info": false, 00:32:13.184 "zone_management": false, 00:32:13.184 "zone_append": 
false, 00:32:13.184 "compare": false, 00:32:13.184 "compare_and_write": false, 00:32:13.184 "abort": false, 00:32:13.184 "seek_hole": true, 00:32:13.184 "seek_data": true, 00:32:13.184 "copy": false, 00:32:13.184 "nvme_iov_md": false 00:32:13.184 }, 00:32:13.184 "driver_specific": { 00:32:13.184 "lvol": { 00:32:13.184 "lvol_store_uuid": "df3ada73-4ded-4d8d-9b20-4b7327e023b0", 00:32:13.184 "base_bdev": "nvme0n1", 00:32:13.184 "thin_provision": true, 00:32:13.184 "num_allocated_clusters": 0, 00:32:13.184 "snapshot": false, 00:32:13.184 "clone": false, 00:32:13.184 "esnap_clone": false 00:32:13.184 } 00:32:13.184 } 00:32:13.184 } 00:32:13.184 ]' 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:32:13.184 23:12:52 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:32:13.443 23:12:52 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:13.443 23:12:52 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:13.443 23:12:52 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:13.443 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:13.443 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:13.443 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:13.443 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:13.443 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:13.702 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:13.702 { 00:32:13.702 "name": "09e9415f-6f7b-426b-8fe8-1f97c88b37ec", 00:32:13.702 "aliases": [ 00:32:13.702 "lvs/nvme0n1p0" 00:32:13.702 ], 00:32:13.702 "product_name": "Logical Volume", 00:32:13.702 "block_size": 4096, 00:32:13.702 "num_blocks": 26476544, 00:32:13.702 "uuid": "09e9415f-6f7b-426b-8fe8-1f97c88b37ec", 00:32:13.702 "assigned_rate_limits": { 00:32:13.702 "rw_ios_per_sec": 0, 00:32:13.702 "rw_mbytes_per_sec": 0, 00:32:13.702 "r_mbytes_per_sec": 0, 00:32:13.702 "w_mbytes_per_sec": 0 00:32:13.702 }, 00:32:13.702 "claimed": false, 00:32:13.702 "zoned": false, 00:32:13.702 "supported_io_types": { 00:32:13.702 "read": true, 00:32:13.702 "write": true, 00:32:13.702 "unmap": true, 00:32:13.702 "flush": false, 00:32:13.702 "reset": true, 00:32:13.702 "nvme_admin": false, 00:32:13.702 "nvme_io": false, 00:32:13.702 "nvme_io_md": false, 00:32:13.702 "write_zeroes": true, 00:32:13.702 "zcopy": false, 00:32:13.702 "get_zone_info": false, 00:32:13.702 "zone_management": false, 
00:32:13.702 "zone_append": false, 00:32:13.702 "compare": false, 00:32:13.702 "compare_and_write": false, 00:32:13.702 "abort": false, 00:32:13.702 "seek_hole": true, 00:32:13.702 "seek_data": true, 00:32:13.702 "copy": false, 00:32:13.702 "nvme_iov_md": false 00:32:13.702 }, 00:32:13.702 "driver_specific": { 00:32:13.702 "lvol": { 00:32:13.702 "lvol_store_uuid": "df3ada73-4ded-4d8d-9b20-4b7327e023b0", 00:32:13.702 "base_bdev": "nvme0n1", 00:32:13.702 "thin_provision": true, 00:32:13.702 "num_allocated_clusters": 0, 00:32:13.702 "snapshot": false, 00:32:13.702 "clone": false, 00:32:13.702 "esnap_clone": false 00:32:13.702 } 00:32:13.702 } 00:32:13.702 } 00:32:13.702 ]' 00:32:13.702 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:13.702 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:13.702 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:13.702 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:13.702 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:13.702 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:13.702 23:12:52 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:32:13.702 23:12:52 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:13.961 23:12:52 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:32:13.961 23:12:52 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:13.961 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:13.961 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:13.961 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:32:13.961 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:32:13.961 23:12:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 09e9415f-6f7b-426b-8fe8-1f97c88b37ec 00:32:13.961 23:12:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:13.961 { 00:32:13.961 "name": "09e9415f-6f7b-426b-8fe8-1f97c88b37ec", 00:32:13.961 "aliases": [ 00:32:13.961 "lvs/nvme0n1p0" 00:32:13.961 ], 00:32:13.961 "product_name": "Logical Volume", 00:32:13.961 "block_size": 4096, 00:32:13.961 "num_blocks": 26476544, 00:32:13.961 "uuid": "09e9415f-6f7b-426b-8fe8-1f97c88b37ec", 00:32:13.961 "assigned_rate_limits": { 00:32:13.961 "rw_ios_per_sec": 0, 00:32:13.961 "rw_mbytes_per_sec": 0, 00:32:13.961 "r_mbytes_per_sec": 0, 00:32:13.961 "w_mbytes_per_sec": 0 00:32:13.961 }, 00:32:13.961 "claimed": false, 00:32:13.961 "zoned": false, 00:32:13.961 "supported_io_types": { 00:32:13.961 "read": true, 00:32:13.961 "write": true, 00:32:13.961 "unmap": true, 00:32:13.961 "flush": false, 00:32:13.961 "reset": true, 00:32:13.961 "nvme_admin": false, 00:32:13.961 "nvme_io": false, 00:32:13.961 "nvme_io_md": false, 00:32:13.961 "write_zeroes": true, 00:32:13.961 "zcopy": false, 00:32:13.961 "get_zone_info": false, 00:32:13.961 "zone_management": false, 00:32:13.961 "zone_append": false, 00:32:13.961 "compare": false, 00:32:13.961 "compare_and_write": false, 00:32:13.961 "abort": false, 00:32:13.961 "seek_hole": 
true, 00:32:13.961 "seek_data": true, 00:32:13.961 "copy": false, 00:32:13.961 "nvme_iov_md": false 00:32:13.961 }, 00:32:13.961 "driver_specific": { 00:32:13.961 "lvol": { 00:32:13.961 "lvol_store_uuid": "df3ada73-4ded-4d8d-9b20-4b7327e023b0", 00:32:13.961 "base_bdev": "nvme0n1", 00:32:13.961 "thin_provision": true, 00:32:13.961 "num_allocated_clusters": 0, 00:32:13.961 "snapshot": false, 00:32:13.961 "clone": false, 00:32:13.961 "esnap_clone": false 00:32:13.961 } 00:32:13.961 } 00:32:13.961 } 00:32:13.961 ]' 00:32:13.961 23:12:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 09e9415f-6f7b-426b-8fe8-1f97c88b37ec --l2p_dram_limit 10' 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:32:14.221 23:12:53 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 09e9415f-6f7b-426b-8fe8-1f97c88b37ec --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:32:14.221 [2024-12-13 23:12:53.334822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.221 [2024-12-13 23:12:53.334854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:14.221 [2024-12-13 23:12:53.334867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:14.221 [2024-12-13 23:12:53.334873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.221 [2024-12-13 23:12:53.334918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.221 [2024-12-13 23:12:53.334925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:14.221 [2024-12-13 23:12:53.334933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:14.221 [2024-12-13 23:12:53.334939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.221 [2024-12-13 23:12:53.334959] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:14.221 [2024-12-13 23:12:53.335539] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:14.221 [2024-12-13 23:12:53.335555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.221 [2024-12-13 23:12:53.335562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:14.221 [2024-12-13 23:12:53.335571] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:32:14.221 [2024-12-13 23:12:53.335577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.221 [2024-12-13 23:12:53.335601] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2 00:32:14.221 [2024-12-13 23:12:53.336531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.221 [2024-12-13 23:12:53.336555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:14.221 [2024-12-13 23:12:53.336563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:14.221 [2024-12-13 23:12:53.336572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.221 [2024-12-13 23:12:53.341215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.221 [2024-12-13 23:12:53.341245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:14.221 [2024-12-13 23:12:53.341253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.606 ms 00:32:14.221 [2024-12-13 23:12:53.341259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.221 [2024-12-13 23:12:53.341361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.221 [2024-12-13 23:12:53.341371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:14.221 [2024-12-13 23:12:53.341378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:14.221 [2024-12-13 23:12:53.341388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.221 [2024-12-13 23:12:53.341429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.222 [2024-12-13 23:12:53.341438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:14.222 [2024-12-13 23:12:53.341444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:14.222 [2024-12-13 23:12:53.341454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.222 [2024-12-13 23:12:53.341471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:14.222 [2024-12-13 23:12:53.344378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.222 [2024-12-13 23:12:53.344402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:14.222 [2024-12-13 23:12:53.344412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms 00:32:14.222 [2024-12-13 23:12:53.344418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.222 [2024-12-13 23:12:53.344445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.222 [2024-12-13 23:12:53.344452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:14.222 [2024-12-13 23:12:53.344460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:14.222 [2024-12-13 23:12:53.344466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.222 [2024-12-13 23:12:53.344479] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:14.222 [2024-12-13 23:12:53.344586] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:14.222 [2024-12-13 23:12:53.344599] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:14.222 [2024-12-13 23:12:53.344607] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:14.222 [2024-12-13 23:12:53.344617] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:14.222 [2024-12-13 23:12:53.344623] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:14.222 [2024-12-13 23:12:53.344631] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:14.222 [2024-12-13 23:12:53.344637] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:14.222 [2024-12-13 23:12:53.344646] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:14.222 [2024-12-13 23:12:53.344651] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:14.222 [2024-12-13 23:12:53.344659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.222 [2024-12-13 23:12:53.344670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:14.222 [2024-12-13 23:12:53.344677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:32:14.222 [2024-12-13 23:12:53.344683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.222 [2024-12-13 23:12:53.344750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.222 [2024-12-13 23:12:53.344765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:14.222 [2024-12-13 23:12:53.344773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:14.222 [2024-12-13 23:12:53.344778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.222 [2024-12-13 23:12:53.344854] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:14.222 [2024-12-13 23:12:53.344862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:14.222 [2024-12-13 23:12:53.344870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:14.222 [2024-12-13 23:12:53.344876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.222 [2024-12-13 23:12:53.344883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:14.222 [2024-12-13 23:12:53.344889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:14.222 [2024-12-13 23:12:53.344896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:14.222 [2024-12-13 23:12:53.344901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:14.222 [2024-12-13 23:12:53.344908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:14.222 [2024-12-13 23:12:53.344913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:14.222 [2024-12-13 23:12:53.344920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:14.222 [2024-12-13 23:12:53.344925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:14.222 [2024-12-13 23:12:53.344932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:14.222 [2024-12-13 23:12:53.344939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:14.222 [2024-12-13 23:12:53.344946] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:32:14.222 [2024-12-13 23:12:53.344951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.222 [2024-12-13 23:12:53.344958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:14.222 [2024-12-13 23:12:53.344963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:14.222 [2024-12-13 23:12:53.344969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.222 [2024-12-13 23:12:53.344974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:14.222 [2024-12-13 23:12:53.344981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:14.222 [2024-12-13 23:12:53.344986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:14.222 [2024-12-13 23:12:53.344993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:14.222 [2024-12-13 23:12:53.344998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:14.222 [2024-12-13 23:12:53.345004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:14.222 [2024-12-13 23:12:53.345009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:14.222 [2024-12-13 23:12:53.345015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:14.222 [2024-12-13 23:12:53.345020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:14.222 [2024-12-13 23:12:53.345026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:14.222 [2024-12-13 23:12:53.345031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:14.222 [2024-12-13 23:12:53.345037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:14.222 [2024-12-13 23:12:53.345041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:14.222 [2024-12-13 23:12:53.345049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:14.222 [2024-12-13 23:12:53.345054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:14.222 [2024-12-13 23:12:53.345060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:14.222 [2024-12-13 23:12:53.345065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:14.222 [2024-12-13 23:12:53.345071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:14.222 [2024-12-13 23:12:53.345076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:14.222 [2024-12-13 23:12:53.345083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:14.222 [2024-12-13 23:12:53.345088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.222 [2024-12-13 23:12:53.345094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:14.222 [2024-12-13 23:12:53.345099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:14.222 [2024-12-13 23:12:53.345105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.222 [2024-12-13 23:12:53.345110] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:14.222 [2024-12-13 23:12:53.345117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:14.222 [2024-12-13 23:12:53.345124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:14.222 [2024-12-13 
23:12:53.345131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.222 [2024-12-13 23:12:53.345137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:14.222 [2024-12-13 23:12:53.345188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:14.222 [2024-12-13 23:12:53.345193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:14.222 [2024-12-13 23:12:53.345201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:14.222 [2024-12-13 23:12:53.345207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:14.222 [2024-12-13 23:12:53.345213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:14.222 [2024-12-13 23:12:53.345219] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:14.222 [2024-12-13 23:12:53.345227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:14.222 [2024-12-13 23:12:53.345236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:14.222 [2024-12-13 23:12:53.345242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:14.222 [2024-12-13 23:12:53.345248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:14.222 [2024-12-13 23:12:53.345254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:14.222 [2024-12-13 23:12:53.345260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:14.222 [2024-12-13 23:12:53.345266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:14.222 [2024-12-13 23:12:53.345271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:14.222 [2024-12-13 23:12:53.345278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:14.222 [2024-12-13 23:12:53.345283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:14.222 [2024-12-13 23:12:53.345293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:14.222 [2024-12-13 23:12:53.345298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:14.222 [2024-12-13 23:12:53.345305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:14.222 [2024-12-13 23:12:53.345310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:14.222 [2024-12-13 23:12:53.345316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:14.222 [2024-12-13 
23:12:53.345321] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:14.222 [2024-12-13 23:12:53.345329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:14.222 [2024-12-13 23:12:53.345336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:14.223 [2024-12-13 23:12:53.345342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:14.223 [2024-12-13 23:12:53.345348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:14.223 [2024-12-13 23:12:53.345355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:14.223 [2024-12-13 23:12:53.345361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.223 [2024-12-13 23:12:53.345367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:14.223 [2024-12-13 23:12:53.345375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:32:14.223 [2024-12-13 23:12:53.345382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.223 [2024-12-13 23:12:53.345421] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:32:14.223 [2024-12-13 23:12:53.345432] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:18.431 [2024-12-13 23:12:56.860826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.431 [2024-12-13 23:12:56.861129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:18.431 [2024-12-13 23:12:56.861159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3515.390 ms 00:32:18.431 [2024-12-13 23:12:56.861172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.431 [2024-12-13 23:12:56.893569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.431 [2024-12-13 23:12:56.893634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:18.431 [2024-12-13 23:12:56.893649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.148 ms 00:32:18.431 [2024-12-13 23:12:56.893662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.431 [2024-12-13 23:12:56.893839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:56.893856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:18.432 [2024-12-13 23:12:56.893866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:32:18.432 [2024-12-13 23:12:56.893885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:56.929623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:56.929674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:18.432 [2024-12-13 23:12:56.929686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.701 ms 00:32:18.432 [2024-12-13 23:12:56.929697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:56.929734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:56.929750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:18.432 [2024-12-13 23:12:56.929785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:18.432 [2024-12-13 23:12:56.929803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:56.930378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:56.930420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:18.432 [2024-12-13 23:12:56.930435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:32:18.432 [2024-12-13 23:12:56.930446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:56.930567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:56.930580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:18.432 [2024-12-13 23:12:56.930592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:32:18.432 [2024-12-13 23:12:56.930605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:56.948455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:56.948682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:18.432 [2024-12-13 23:12:56.948705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.829 ms 00:32:18.432 [2024-12-13 23:12:56.948716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:56.975822] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:18.432 [2024-12-13 23:12:56.979786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:56.979832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:18.432 [2024-12-13 23:12:56.979848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.930 ms 00:32:18.432 [2024-12-13 23:12:56.979858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.085543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.085844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:18.432 [2024-12-13 23:12:57.085879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.628 ms 00:32:18.432 [2024-12-13 23:12:57.085890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.086104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.086121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:18.432 [2024-12-13 23:12:57.086138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:32:18.432 [2024-12-13 23:12:57.086147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.112630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.112694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:32:18.432 [2024-12-13 23:12:57.112711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.419 ms 00:32:18.432 [2024-12-13 23:12:57.112721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.138204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.138252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:18.432 [2024-12-13 23:12:57.138268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.400 ms 00:32:18.432 [2024-12-13 23:12:57.138276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.138943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.138960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:18.432 [2024-12-13 23:12:57.138974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:32:18.432 [2024-12-13 23:12:57.138984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.226982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.227041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:18.432 [2024-12-13 23:12:57.227063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.929 ms 00:32:18.432 [2024-12-13 23:12:57.227072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.254213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.254265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:18.432 [2024-12-13 23:12:57.254282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.037 ms 00:32:18.432 [2024-12-13 23:12:57.254290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.280238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.280456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:18.432 [2024-12-13 23:12:57.280485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.890 ms 00:32:18.432 [2024-12-13 23:12:57.280493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.307240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.307442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:18.432 [2024-12-13 23:12:57.307496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.396 ms 00:32:18.432 [2024-12-13 23:12:57.307506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.307557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.307569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:18.432 [2024-12-13 23:12:57.307585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:18.432 [2024-12-13 23:12:57.307593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.307694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.432 [2024-12-13 23:12:57.307709] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:18.432 [2024-12-13 23:12:57.307721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:18.432 [2024-12-13 23:12:57.307730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.432 [2024-12-13 23:12:57.308923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3973.576 ms, result 0 00:32:18.432 { 00:32:18.432 "name": "ftl0", 00:32:18.432 "uuid": "29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2" 00:32:18.432 } 00:32:18.432 23:12:57 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:32:18.432 23:12:57 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:18.432 23:12:57 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:32:18.432 23:12:57 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:18.694 [2024-12-13 23:12:57.736238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.694 [2024-12-13 23:12:57.736291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:18.694 [2024-12-13 23:12:57.736304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:18.694 [2024-12-13 23:12:57.736314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.694 [2024-12-13 23:12:57.736338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:18.694 [2024-12-13 23:12:57.739037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.694 [2024-12-13 23:12:57.739068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:18.694 [2024-12-13 23:12:57.739081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms 00:32:18.694 [2024-12-13 23:12:57.739089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.694 [2024-12-13 23:12:57.739360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.694 [2024-12-13 23:12:57.739376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:18.694 [2024-12-13 23:12:57.739386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:32:18.694 [2024-12-13 23:12:57.739394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.694 [2024-12-13 23:12:57.742652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.694 [2024-12-13 23:12:57.742805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:18.694 [2024-12-13 23:12:57.742825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.241 ms 00:32:18.694 [2024-12-13 23:12:57.742834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.694 [2024-12-13 23:12:57.748951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.694 [2024-12-13 23:12:57.748979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:18.694 [2024-12-13 23:12:57.748993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.091 ms 00:32:18.694 [2024-12-13 23:12:57.749004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.694 [2024-12-13 23:12:57.773369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.695 
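
For reference, the sequence echoed by restore.sh above snapshots the live bdev subsystem configuration to JSON before the FTL bdev is torn down, so the same device can be re-attached later. A minimal sketch assembled from the commands shown in this log; the brace-wrapped redirect into ftl.json is an assumption about how the script stitches the pieces together (the file path itself appears later in the log as the --json argument to spdk_dd):

  # Capture the current bdev config as a standalone JSON config file,
  # then unload the FTL bdev so it can be restored from that config.
  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
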
[2024-12-13 23:12:57.773404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:18.695 [2024-12-13 23:12:57.773417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.306 ms 00:32:18.695 [2024-12-13 23:12:57.773425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.695 [2024-12-13 23:12:57.789129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.695 [2024-12-13 23:12:57.789271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:18.695 [2024-12-13 23:12:57.789293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.659 ms 00:32:18.695 [2024-12-13 23:12:57.789302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.695 [2024-12-13 23:12:57.789453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.695 [2024-12-13 23:12:57.789465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:18.695 [2024-12-13 23:12:57.789476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:32:18.695 [2024-12-13 23:12:57.789484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.695 [2024-12-13 23:12:57.813300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.695 [2024-12-13 23:12:57.813333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:18.695 [2024-12-13 23:12:57.813346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.793 ms 00:32:18.695 [2024-12-13 23:12:57.813353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.958 [2024-12-13 23:12:57.836893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.958 [2024-12-13 23:12:57.836936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:18.958 [2024-12-13 23:12:57.836949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.497 ms 00:32:18.958 [2024-12-13 23:12:57.836956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.958 [2024-12-13 23:12:57.860232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.958 [2024-12-13 23:12:57.860384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:18.958 [2024-12-13 23:12:57.860408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.229 ms 00:32:18.958 [2024-12-13 23:12:57.860416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.958 [2024-12-13 23:12:57.884386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.958 [2024-12-13 23:12:57.884430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:18.958 [2024-12-13 23:12:57.884445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.885 ms 00:32:18.958 [2024-12-13 23:12:57.884452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.958 [2024-12-13 23:12:57.884502] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:18.958 [2024-12-13 23:12:57.884520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884804] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.884993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 
23:12:57.885053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:18.958 [2024-12-13 23:12:57.885237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:32:18.959 [2024-12-13 23:12:57.885288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:18.959 [2024-12-13 23:12:57.885502] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:18.959 [2024-12-13 23:12:57.885512] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2 00:32:18.959 
[2024-12-13 23:12:57.885521] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:18.959 [2024-12-13 23:12:57.885532] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:18.959 [2024-12-13 23:12:57.885542] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:18.959 [2024-12-13 23:12:57.885552] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:18.959 [2024-12-13 23:12:57.885559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:18.959 [2024-12-13 23:12:57.885568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:18.959 [2024-12-13 23:12:57.885576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:18.959 [2024-12-13 23:12:57.885584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:18.959 [2024-12-13 23:12:57.885590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:18.959 [2024-12-13 23:12:57.885600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.959 [2024-12-13 23:12:57.885608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:18.959 [2024-12-13 23:12:57.885619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:32:18.959 [2024-12-13 23:12:57.885629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.959 [2024-12-13 23:12:57.899606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.959 [2024-12-13 23:12:57.899776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:18.959 [2024-12-13 23:12:57.899800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.930 ms 00:32:18.959 [2024-12-13 23:12:57.899809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.959 [2024-12-13 23:12:57.900211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.959 [2024-12-13 23:12:57.900223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:18.959 [2024-12-13 23:12:57.900237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:32:18.959 [2024-12-13 23:12:57.900246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.959 [2024-12-13 23:12:57.946388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:18.959 [2024-12-13 23:12:57.946432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:18.959 [2024-12-13 23:12:57.946448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:18.959 [2024-12-13 23:12:57.946457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.959 [2024-12-13 23:12:57.946528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:18.959 [2024-12-13 23:12:57.946537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:18.959 [2024-12-13 23:12:57.946550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:18.959 [2024-12-13 23:12:57.946559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.959 [2024-12-13 23:12:57.946641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:18.959 [2024-12-13 23:12:57.946653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:18.959 [2024-12-13 23:12:57.946666] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:18.959 [2024-12-13 23:12:57.946674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.959 [2024-12-13 23:12:57.946698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:18.959 [2024-12-13 23:12:57.946707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:18.959 [2024-12-13 23:12:57.946717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:18.959 [2024-12-13 23:12:57.946729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.959 [2024-12-13 23:12:58.031881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:18.959 [2024-12-13 23:12:58.031934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:18.959 [2024-12-13 23:12:58.031949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:18.959 [2024-12-13 23:12:58.031958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.220 [2024-12-13 23:12:58.102087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.220 [2024-12-13 23:12:58.102143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:19.220 [2024-12-13 23:12:58.102159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.220 [2024-12-13 23:12:58.102171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.220 [2024-12-13 23:12:58.102282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.220 [2024-12-13 23:12:58.102293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:19.220 [2024-12-13 23:12:58.102304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.220 [2024-12-13 23:12:58.102312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.220 [2024-12-13 23:12:58.102366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.220 [2024-12-13 23:12:58.102378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:19.220 [2024-12-13 23:12:58.102389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.220 [2024-12-13 23:12:58.102398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.220 [2024-12-13 23:12:58.102501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.220 [2024-12-13 23:12:58.102515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:19.220 [2024-12-13 23:12:58.102526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.220 [2024-12-13 23:12:58.102535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.220 [2024-12-13 23:12:58.102578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.220 [2024-12-13 23:12:58.102588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:19.220 [2024-12-13 23:12:58.102599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.221 [2024-12-13 23:12:58.102607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.221 [2024-12-13 23:12:58.102657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.221 [2024-12-13 23:12:58.102667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:32:19.221 [2024-12-13 23:12:58.102677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.221 [2024-12-13 23:12:58.102685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.221 [2024-12-13 23:12:58.102739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:19.221 [2024-12-13 23:12:58.102751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:19.221 [2024-12-13 23:12:58.102803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:19.221 [2024-12-13 23:12:58.102812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.221 [2024-12-13 23:12:58.102988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.680 ms, result 0 00:32:19.221 true 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 86467 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 86467 ']' 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 86467 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86467 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:19.221 killing process with pid 86467 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86467' 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 86467 00:32:19.221 23:12:58 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 86467 00:32:23.431 23:13:02 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:27.635 262144+0 records in 00:32:27.635 262144+0 records out 00:32:27.635 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.08175 s, 263 MB/s 00:32:27.635 23:13:06 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:29.553 23:13:08 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:29.553 [2024-12-13 23:13:08.520519] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
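
The write phase visible above reduces to a short pipeline: generate a 1 GiB random test file (4 KiB blocks x 262144 = 1073741824 bytes, matching the dd summary), take its md5 checksum as the reference for the later verification pass, then replay the file onto the FTL bdev through spdk_dd using the JSON config captured earlier. A sketch using only the commands already present in this log (ordering and comments added):

  # 1 GiB of random data to be written through ftl0 and read back later.
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
  # Reference checksum; the restore pass is expected to reproduce it.
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
  # Write the file onto the FTL bdev, re-attaching it from the saved JSON config.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
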
00:32:29.553 [2024-12-13 23:13:08.520659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86707 ] 00:32:29.553 [2024-12-13 23:13:08.683309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.814 [2024-12-13 23:13:08.808062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.076 [2024-12-13 23:13:09.104386] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:30.076 [2024-12-13 23:13:09.104475] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:30.339 [2024-12-13 23:13:09.265693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.265781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:30.339 [2024-12-13 23:13:09.265798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:30.339 [2024-12-13 23:13:09.265808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.265869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.265882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:30.339 [2024-12-13 23:13:09.265891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:30.339 [2024-12-13 23:13:09.265900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.265949] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:30.339 [2024-12-13 23:13:09.266729] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:30.339 [2024-12-13 23:13:09.266751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.266776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:30.339 [2024-12-13 23:13:09.266787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:32:30.339 [2024-12-13 23:13:09.266795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.268495] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:30.339 [2024-12-13 23:13:09.283039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.283091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:30.339 [2024-12-13 23:13:09.283106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.546 ms 00:32:30.339 [2024-12-13 23:13:09.283114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.283196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.283207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:30.339 [2024-12-13 23:13:09.283215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:30.339 [2024-12-13 23:13:09.283223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.291475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:30.339 [2024-12-13 23:13:09.291681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:30.339 [2024-12-13 23:13:09.291701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.174 ms 00:32:30.339 [2024-12-13 23:13:09.291718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.291825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.291835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:30.339 [2024-12-13 23:13:09.291845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:32:30.339 [2024-12-13 23:13:09.291852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.291903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.291915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:30.339 [2024-12-13 23:13:09.291923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:30.339 [2024-12-13 23:13:09.291931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.291958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:30.339 [2024-12-13 23:13:09.295943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.295986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:30.339 [2024-12-13 23:13:09.296000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.991 ms 00:32:30.339 [2024-12-13 23:13:09.296008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.296048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.296057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:30.339 [2024-12-13 23:13:09.296067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:30.339 [2024-12-13 23:13:09.296074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.339 [2024-12-13 23:13:09.296127] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:30.339 [2024-12-13 23:13:09.296153] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:30.339 [2024-12-13 23:13:09.296190] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:30.339 [2024-12-13 23:13:09.296211] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:30.339 [2024-12-13 23:13:09.296318] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:30.339 [2024-12-13 23:13:09.296332] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:30.339 [2024-12-13 23:13:09.296345] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:30.339 [2024-12-13 23:13:09.296355] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:30.339 [2024-12-13 23:13:09.296365] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:30.339 [2024-12-13 23:13:09.296373] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:30.339 [2024-12-13 23:13:09.296382] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:30.339 [2024-12-13 23:13:09.296389] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:30.339 [2024-12-13 23:13:09.296403] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:30.339 [2024-12-13 23:13:09.296412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.339 [2024-12-13 23:13:09.296420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:30.340 [2024-12-13 23:13:09.296429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:32:30.340 [2024-12-13 23:13:09.296437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.340 [2024-12-13 23:13:09.296520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.340 [2024-12-13 23:13:09.296537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:30.340 [2024-12-13 23:13:09.296545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:30.340 [2024-12-13 23:13:09.296553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.340 [2024-12-13 23:13:09.296656] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:30.340 [2024-12-13 23:13:09.296669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:30.340 [2024-12-13 23:13:09.296678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:30.340 [2024-12-13 23:13:09.296686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:30.340 [2024-12-13 23:13:09.296702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:30.340 [2024-12-13 23:13:09.296721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:30.340 [2024-12-13 23:13:09.296728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:30.340 [2024-12-13 23:13:09.296743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:30.340 [2024-12-13 23:13:09.296752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:30.340 [2024-12-13 23:13:09.296789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:30.340 [2024-12-13 23:13:09.296807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:30.340 [2024-12-13 23:13:09.296815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:30.340 [2024-12-13 23:13:09.296822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:30.340 [2024-12-13 23:13:09.296837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:30.340 [2024-12-13 23:13:09.296845] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:30.340 [2024-12-13 23:13:09.296860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:30.340 [2024-12-13 23:13:09.296873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:30.340 [2024-12-13 23:13:09.296880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:30.340 [2024-12-13 23:13:09.296894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:30.340 [2024-12-13 23:13:09.296902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:30.340 [2024-12-13 23:13:09.296916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:30.340 [2024-12-13 23:13:09.296923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:30.340 [2024-12-13 23:13:09.296937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:30.340 [2024-12-13 23:13:09.296946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:30.340 [2024-12-13 23:13:09.296953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:30.340 [2024-12-13 23:13:09.296959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:30.340 [2024-12-13 23:13:09.296966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:30.340 [2024-12-13 23:13:09.296972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:30.340 [2024-12-13 23:13:09.296979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:30.340 [2024-12-13 23:13:09.296986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:30.340 [2024-12-13 23:13:09.296993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:30.340 [2024-12-13 23:13:09.297000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:30.340 [2024-12-13 23:13:09.297007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:30.340 [2024-12-13 23:13:09.297014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:30.340 [2024-12-13 23:13:09.297020] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:30.340 [2024-12-13 23:13:09.297028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:30.340 [2024-12-13 23:13:09.297039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:30.340 [2024-12-13 23:13:09.297049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:30.340 [2024-12-13 23:13:09.297056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:30.340 [2024-12-13 23:13:09.297063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:30.340 [2024-12-13 23:13:09.297070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:30.340 
[2024-12-13 23:13:09.297078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:30.340 [2024-12-13 23:13:09.297085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:30.340 [2024-12-13 23:13:09.297092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:30.340 [2024-12-13 23:13:09.297101] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:30.340 [2024-12-13 23:13:09.297110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:30.340 [2024-12-13 23:13:09.297128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:30.340 [2024-12-13 23:13:09.297138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:30.340 [2024-12-13 23:13:09.297145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:30.340 [2024-12-13 23:13:09.297152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:30.340 [2024-12-13 23:13:09.297159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:30.340 [2024-12-13 23:13:09.297166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:30.340 [2024-12-13 23:13:09.297174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:30.340 [2024-12-13 23:13:09.297183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:30.340 [2024-12-13 23:13:09.297192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:30.340 [2024-12-13 23:13:09.297199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:30.340 [2024-12-13 23:13:09.297207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:30.340 [2024-12-13 23:13:09.297214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:30.340 [2024-12-13 23:13:09.297222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:30.340 [2024-12-13 23:13:09.297231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:30.340 [2024-12-13 23:13:09.297238] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:30.340 [2024-12-13 23:13:09.297246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:30.340 [2024-12-13 23:13:09.297254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:30.340 [2024-12-13 23:13:09.297261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:30.340 [2024-12-13 23:13:09.297268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:30.340 [2024-12-13 23:13:09.297275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:30.340 [2024-12-13 23:13:09.297283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.340 [2024-12-13 23:13:09.297291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:30.340 [2024-12-13 23:13:09.297300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:32:30.340 [2024-12-13 23:13:09.297308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.340 [2024-12-13 23:13:09.329260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.340 [2024-12-13 23:13:09.329311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:30.340 [2024-12-13 23:13:09.329324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.904 ms 00:32:30.340 [2024-12-13 23:13:09.329338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.340 [2024-12-13 23:13:09.329434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.340 [2024-12-13 23:13:09.329444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:30.340 [2024-12-13 23:13:09.329454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:30.340 [2024-12-13 23:13:09.329462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.340 [2024-12-13 23:13:09.378578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.340 [2024-12-13 23:13:09.378629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:30.340 [2024-12-13 23:13:09.378644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.053 ms 00:32:30.340 [2024-12-13 23:13:09.378653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.340 [2024-12-13 23:13:09.378705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.340 [2024-12-13 23:13:09.378715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:30.340 [2024-12-13 23:13:09.378728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:30.340 [2024-12-13 23:13:09.378737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.340 [2024-12-13 23:13:09.379369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.340 [2024-12-13 23:13:09.379395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:30.341 [2024-12-13 23:13:09.379406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:32:30.341 [2024-12-13 23:13:09.379415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.341 [2024-12-13 23:13:09.379593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.341 [2024-12-13 23:13:09.379606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:30.341 [2024-12-13 23:13:09.379619] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:32:30.341 [2024-12-13 23:13:09.379627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.341 [2024-12-13 23:13:09.395406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.341 [2024-12-13 23:13:09.395473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:30.341 [2024-12-13 23:13:09.395485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.757 ms 00:32:30.341 [2024-12-13 23:13:09.395493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.341 [2024-12-13 23:13:09.410080] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:30.341 [2024-12-13 23:13:09.410284] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:30.341 [2024-12-13 23:13:09.410304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.341 [2024-12-13 23:13:09.410314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:30.341 [2024-12-13 23:13:09.410325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.698 ms 00:32:30.341 [2024-12-13 23:13:09.410332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.341 [2024-12-13 23:13:09.436737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.341 [2024-12-13 23:13:09.436990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:30.341 [2024-12-13 23:13:09.437014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.082 ms 00:32:30.341 [2024-12-13 23:13:09.437024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.341 [2024-12-13 23:13:09.450275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.341 [2024-12-13 23:13:09.450324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:30.341 [2024-12-13 23:13:09.450337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.193 ms 00:32:30.341 [2024-12-13 23:13:09.450346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.341 [2024-12-13 23:13:09.462916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.341 [2024-12-13 23:13:09.462960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:30.341 [2024-12-13 23:13:09.462972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.522 ms 00:32:30.341 [2024-12-13 23:13:09.462980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.341 [2024-12-13 23:13:09.463642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.341 [2024-12-13 23:13:09.463673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:30.341 [2024-12-13 23:13:09.463685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:32:30.341 [2024-12-13 23:13:09.463697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.603 [2024-12-13 23:13:09.528977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.603 [2024-12-13 23:13:09.529043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:30.603 [2024-12-13 23:13:09.529059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.260 ms 00:32:30.603 [2024-12-13 23:13:09.529074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.603 [2024-12-13 23:13:09.540174] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:30.603 [2024-12-13 23:13:09.543284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.603 [2024-12-13 23:13:09.543327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:30.603 [2024-12-13 23:13:09.543340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.142 ms 00:32:30.603 [2024-12-13 23:13:09.543350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.603 [2024-12-13 23:13:09.543440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.603 [2024-12-13 23:13:09.543469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:30.603 [2024-12-13 23:13:09.543481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:30.603 [2024-12-13 23:13:09.543489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.603 [2024-12-13 23:13:09.543569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.603 [2024-12-13 23:13:09.543581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:30.603 [2024-12-13 23:13:09.543590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:30.603 [2024-12-13 23:13:09.543598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.603 [2024-12-13 23:13:09.543623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.603 [2024-12-13 23:13:09.543632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:30.603 [2024-12-13 23:13:09.543641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:30.603 [2024-12-13 23:13:09.543649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.603 [2024-12-13 23:13:09.543685] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:30.603 [2024-12-13 23:13:09.543698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.603 [2024-12-13 23:13:09.543706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:30.603 [2024-12-13 23:13:09.543715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:30.603 [2024-12-13 23:13:09.543724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.603 [2024-12-13 23:13:09.569818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.603 [2024-12-13 23:13:09.569987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:30.603 [2024-12-13 23:13:09.570050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.071 ms 00:32:30.603 [2024-12-13 23:13:09.570082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:30.603 [2024-12-13 23:13:09.570494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:30.603 [2024-12-13 23:13:09.570600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:30.603 [2024-12-13 23:13:09.570617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:30.603 [2024-12-13 23:13:09.570626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:30.603 [2024-12-13 23:13:09.572631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.788 ms, result 0 00:32:31.576  [2024-12-13T23:13:11.653Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-13T23:13:12.594Z] Copying: 37/1024 [MB] (27 MBps) [2024-12-13T23:13:13.979Z] Copying: 52/1024 [MB] (14 MBps) [2024-12-13T23:13:14.921Z] Copying: 71/1024 [MB] (19 MBps) [2024-12-13T23:13:15.866Z] Copying: 91/1024 [MB] (19 MBps) [2024-12-13T23:13:16.809Z] Copying: 113/1024 [MB] (22 MBps) [2024-12-13T23:13:17.750Z] Copying: 132/1024 [MB] (18 MBps) [2024-12-13T23:13:18.689Z] Copying: 151/1024 [MB] (19 MBps) [2024-12-13T23:13:19.630Z] Copying: 176/1024 [MB] (24 MBps) [2024-12-13T23:13:21.014Z] Copying: 198/1024 [MB] (21 MBps) [2024-12-13T23:13:21.587Z] Copying: 216/1024 [MB] (18 MBps) [2024-12-13T23:13:22.974Z] Copying: 234/1024 [MB] (17 MBps) [2024-12-13T23:13:23.915Z] Copying: 253/1024 [MB] (19 MBps) [2024-12-13T23:13:24.859Z] Copying: 270/1024 [MB] (16 MBps) [2024-12-13T23:13:25.824Z] Copying: 290/1024 [MB] (19 MBps) [2024-12-13T23:13:26.769Z] Copying: 303/1024 [MB] (12 MBps) [2024-12-13T23:13:27.715Z] Copying: 314/1024 [MB] (11 MBps) [2024-12-13T23:13:28.659Z] Copying: 325/1024 [MB] (11 MBps) [2024-12-13T23:13:29.603Z] Copying: 337/1024 [MB] (11 MBps) [2024-12-13T23:13:30.989Z] Copying: 349/1024 [MB] (11 MBps) [2024-12-13T23:13:31.933Z] Copying: 360/1024 [MB] (11 MBps) [2024-12-13T23:13:32.878Z] Copying: 372/1024 [MB] (11 MBps) [2024-12-13T23:13:33.823Z] Copying: 383/1024 [MB] (11 MBps) [2024-12-13T23:13:34.826Z] Copying: 394/1024 [MB] (11 MBps) [2024-12-13T23:13:35.771Z] Copying: 414184/1048576 [kB] (9944 kBps) [2024-12-13T23:13:36.717Z] Copying: 415/1024 [MB] (10 MBps) [2024-12-13T23:13:37.663Z] Copying: 426/1024 [MB] (11 MBps) [2024-12-13T23:13:38.607Z] Copying: 438/1024 [MB] (11 MBps) [2024-12-13T23:13:39.994Z] Copying: 449/1024 [MB] (11 MBps) [2024-12-13T23:13:40.939Z] Copying: 461/1024 [MB] (11 MBps) [2024-12-13T23:13:41.882Z] Copying: 472/1024 [MB] (11 MBps) [2024-12-13T23:13:42.828Z] Copying: 483/1024 [MB] (11 MBps) [2024-12-13T23:13:43.772Z] Copying: 497/1024 [MB] (13 MBps) [2024-12-13T23:13:44.716Z] Copying: 509/1024 [MB] (11 MBps) [2024-12-13T23:13:45.666Z] Copying: 520/1024 [MB] (11 MBps) [2024-12-13T23:13:46.611Z] Copying: 532/1024 [MB] (11 MBps) [2024-12-13T23:13:47.999Z] Copying: 543/1024 [MB] (11 MBps) [2024-12-13T23:13:48.942Z] Copying: 554/1024 [MB] (10 MBps) [2024-12-13T23:13:49.887Z] Copying: 566/1024 [MB] (11 MBps) [2024-12-13T23:13:50.831Z] Copying: 577/1024 [MB] (11 MBps) [2024-12-13T23:13:51.776Z] Copying: 589/1024 [MB] (11 MBps) [2024-12-13T23:13:52.719Z] Copying: 601/1024 [MB] (11 MBps) [2024-12-13T23:13:53.661Z] Copying: 613/1024 [MB] (12 MBps) [2024-12-13T23:13:54.607Z] Copying: 625/1024 [MB] (12 MBps) [2024-12-13T23:13:55.996Z] Copying: 637/1024 [MB] (11 MBps) [2024-12-13T23:13:56.963Z] Copying: 648/1024 [MB] (11 MBps) [2024-12-13T23:13:57.934Z] Copying: 659/1024 [MB] (10 MBps) [2024-12-13T23:13:58.879Z] Copying: 669/1024 [MB] (10 MBps) [2024-12-13T23:13:59.825Z] Copying: 680/1024 [MB] (10 MBps) [2024-12-13T23:14:00.769Z] Copying: 691/1024 [MB] (11 MBps) [2024-12-13T23:14:01.713Z] Copying: 703/1024 [MB] (11 MBps) [2024-12-13T23:14:02.658Z] Copying: 715/1024 [MB] (11 MBps) [2024-12-13T23:14:03.600Z] Copying: 728/1024 [MB] (13 MBps) [2024-12-13T23:14:04.987Z] Copying: 739/1024 [MB] (10 MBps) [2024-12-13T23:14:05.930Z] Copying: 749/1024 [MB] (10 MBps) [2024-12-13T23:14:06.874Z] Copying: 761/1024 [MB] (11 MBps) 
[2024-12-13T23:14:07.819Z] Copying: 773/1024 [MB] (11 MBps) [2024-12-13T23:14:08.761Z] Copying: 783/1024 [MB] (10 MBps) [2024-12-13T23:14:09.706Z] Copying: 794/1024 [MB] (10 MBps) [2024-12-13T23:14:10.648Z] Copying: 804/1024 [MB] (10 MBps) [2024-12-13T23:14:11.592Z] Copying: 816/1024 [MB] (11 MBps) [2024-12-13T23:14:12.977Z] Copying: 828/1024 [MB] (11 MBps) [2024-12-13T23:14:13.919Z] Copying: 839/1024 [MB] (11 MBps) [2024-12-13T23:14:14.863Z] Copying: 851/1024 [MB] (11 MBps) [2024-12-13T23:14:15.806Z] Copying: 862/1024 [MB] (11 MBps) [2024-12-13T23:14:16.751Z] Copying: 874/1024 [MB] (11 MBps) [2024-12-13T23:14:17.695Z] Copying: 884/1024 [MB] (10 MBps) [2024-12-13T23:14:18.641Z] Copying: 895/1024 [MB] (11 MBps) [2024-12-13T23:14:20.068Z] Copying: 907/1024 [MB] (11 MBps) [2024-12-13T23:14:20.685Z] Copying: 918/1024 [MB] (11 MBps) [2024-12-13T23:14:21.629Z] Copying: 929/1024 [MB] (10 MBps) [2024-12-13T23:14:23.020Z] Copying: 940/1024 [MB] (11 MBps) [2024-12-13T23:14:23.593Z] Copying: 952/1024 [MB] (11 MBps) [2024-12-13T23:14:24.982Z] Copying: 963/1024 [MB] (11 MBps) [2024-12-13T23:14:25.928Z] Copying: 973/1024 [MB] (10 MBps) [2024-12-13T23:14:26.871Z] Copying: 985/1024 [MB] (11 MBps) [2024-12-13T23:14:27.813Z] Copying: 996/1024 [MB] (11 MBps) [2024-12-13T23:14:28.758Z] Copying: 1007/1024 [MB] (11 MBps) [2024-12-13T23:14:29.020Z] Copying: 1019/1024 [MB] (11 MBps) [2024-12-13T23:14:29.020Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-13 23:14:28.996667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.880 [2024-12-13 23:14:28.996717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:49.880 [2024-12-13 23:14:28.996729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:49.880 [2024-12-13 23:14:28.996736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.880 [2024-12-13 23:14:28.996754] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:49.880 [2024-12-13 23:14:28.999030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.880 [2024-12-13 23:14:28.999053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:49.880 [2024-12-13 23:14:28.999062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.251 ms 00:33:49.880 [2024-12-13 23:14:28.999072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.880 [2024-12-13 23:14:29.001680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.880 [2024-12-13 23:14:29.001703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:49.880 [2024-12-13 23:14:29.001711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:33:49.880 [2024-12-13 23:14:29.001717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.880 [2024-12-13 23:14:29.001738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.880 [2024-12-13 23:14:29.001746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:49.880 [2024-12-13 23:14:29.001752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:49.880 [2024-12-13 23:14:29.001768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.880 [2024-12-13 23:14:29.001812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.880 [2024-12-13 23:14:29.001819] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:49.880 [2024-12-13 23:14:29.001825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:49.880 [2024-12-13 23:14:29.001831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.880 [2024-12-13 23:14:29.001843] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:49.880 [2024-12-13 23:14:29.001853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 
state: free 00:33:49.880 [2024-12-13 23:14:29.001981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.001997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:49.880 [2024-12-13 23:14:29.002134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002402] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:49.881 [2024-12-13 23:14:29.002430] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:49.881 [2024-12-13 23:14:29.002436] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2 00:33:49.881 [2024-12-13 23:14:29.002442] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:49.881 [2024-12-13 23:14:29.002447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:49.881 [2024-12-13 23:14:29.002453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:49.881 [2024-12-13 23:14:29.002461] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:49.881 [2024-12-13 23:14:29.002466] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:49.881 [2024-12-13 23:14:29.002472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:49.881 [2024-12-13 23:14:29.002478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:49.881 [2024-12-13 23:14:29.002483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:49.881 [2024-12-13 23:14:29.002488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:49.881 [2024-12-13 23:14:29.002493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.881 [2024-12-13 23:14:29.002499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:49.881 [2024-12-13 23:14:29.002505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:33:49.881 [2024-12-13 23:14:29.002510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.881 [2024-12-13 23:14:29.012594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.881 [2024-12-13 23:14:29.012624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:49.881 [2024-12-13 23:14:29.012632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.073 ms 00:33:49.881 [2024-12-13 23:14:29.012639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.881 [2024-12-13 23:14:29.012949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.881 [2024-12-13 23:14:29.012964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:49.881 [2024-12-13 23:14:29.012971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:33:49.881 [2024-12-13 23:14:29.012977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.040629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.040656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:50.142 [2024-12-13 23:14:29.040665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.040670] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.040722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.040729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:50.142 [2024-12-13 23:14:29.040735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.040741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.040783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.040794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:50.142 [2024-12-13 23:14:29.040801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.040808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.040819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.040825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:50.142 [2024-12-13 23:14:29.040835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.040840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.104915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.104954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:50.142 [2024-12-13 23:14:29.104964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.104971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.156675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.156710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:50.142 [2024-12-13 23:14:29.156720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.156727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.156800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.156808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:50.142 [2024-12-13 23:14:29.156820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.156826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.156854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.156862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:50.142 [2024-12-13 23:14:29.156869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.156876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.156940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.156949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:50.142 [2024-12-13 23:14:29.156961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:33:50.142 [2024-12-13 23:14:29.156969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.156990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.156997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:50.142 [2024-12-13 23:14:29.157004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.157010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.157045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.157053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:50.142 [2024-12-13 23:14:29.157059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.157067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.157104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:50.142 [2024-12-13 23:14:29.157111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:50.142 [2024-12-13 23:14:29.157118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:50.142 [2024-12-13 23:14:29.157125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.142 [2024-12-13 23:14:29.157231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 160.538 ms, result 0 00:33:51.085 00:33:51.085 00:33:51.085 23:14:29 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:51.085 [2024-12-13 23:14:30.001238] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:33:51.085 [2024-12-13 23:14:30.001360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87520 ] 00:33:51.085 [2024-12-13 23:14:30.154694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.346 [2024-12-13 23:14:30.245487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.346 [2024-12-13 23:14:30.478836] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:51.346 [2024-12-13 23:14:30.478893] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:51.608 [2024-12-13 23:14:30.635059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.608 [2024-12-13 23:14:30.635098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:51.608 [2024-12-13 23:14:30.635110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:51.608 [2024-12-13 23:14:30.635117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.608 [2024-12-13 23:14:30.635154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.608 [2024-12-13 23:14:30.635163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:51.608 [2024-12-13 23:14:30.635171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:33:51.608 [2024-12-13 23:14:30.635177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.608 [2024-12-13 23:14:30.635190] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:51.608 [2024-12-13 23:14:30.635727] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:51.608 [2024-12-13 23:14:30.635741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.608 [2024-12-13 23:14:30.635748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:51.608 [2024-12-13 23:14:30.635768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:33:51.608 [2024-12-13 23:14:30.635775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.608 [2024-12-13 23:14:30.636038] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:51.608 [2024-12-13 23:14:30.636062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.608 [2024-12-13 23:14:30.636072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:51.608 [2024-12-13 23:14:30.636079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:51.608 [2024-12-13 23:14:30.636085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.608 [2024-12-13 23:14:30.636129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.608 [2024-12-13 23:14:30.636137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:51.608 [2024-12-13 23:14:30.636144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:51.608 [2024-12-13 23:14:30.636149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.608 [2024-12-13 23:14:30.636568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:51.608 [2024-12-13 23:14:30.636587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:51.608 [2024-12-13 23:14:30.636596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:33:51.608 [2024-12-13 23:14:30.636603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.608 [2024-12-13 23:14:30.636667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.608 [2024-12-13 23:14:30.636676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:51.608 [2024-12-13 23:14:30.636683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:33:51.608 [2024-12-13 23:14:30.636688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.608 [2024-12-13 23:14:30.636705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.608 [2024-12-13 23:14:30.636712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:51.608 [2024-12-13 23:14:30.636720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:51.608 [2024-12-13 23:14:30.636726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.609 [2024-12-13 23:14:30.636741] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:51.609 [2024-12-13 23:14:30.639969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.609 [2024-12-13 23:14:30.639994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:51.609 [2024-12-13 23:14:30.640002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:33:51.609 [2024-12-13 23:14:30.640008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.609 [2024-12-13 23:14:30.640039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.609 [2024-12-13 23:14:30.640046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:51.609 [2024-12-13 23:14:30.640052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:51.609 [2024-12-13 23:14:30.640058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.609 [2024-12-13 23:14:30.640094] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:51.609 [2024-12-13 23:14:30.640113] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:51.609 [2024-12-13 23:14:30.640143] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:51.609 [2024-12-13 23:14:30.640161] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:51.609 [2024-12-13 23:14:30.640249] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:51.609 [2024-12-13 23:14:30.640259] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:51.609 [2024-12-13 23:14:30.640269] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:51.609 [2024-12-13 23:14:30.640278] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640285] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640294] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:51.609 [2024-12-13 23:14:30.640301] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:51.609 [2024-12-13 23:14:30.640306] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:51.609 [2024-12-13 23:14:30.640311] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:51.609 [2024-12-13 23:14:30.640317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.609 [2024-12-13 23:14:30.640323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:51.609 [2024-12-13 23:14:30.640329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:33:51.609 [2024-12-13 23:14:30.640334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.609 [2024-12-13 23:14:30.640398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.609 [2024-12-13 23:14:30.640404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:51.609 [2024-12-13 23:14:30.640411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:51.609 [2024-12-13 23:14:30.640419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.609 [2024-12-13 23:14:30.640491] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:51.609 [2024-12-13 23:14:30.640499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:51.609 [2024-12-13 23:14:30.640507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:51.609 [2024-12-13 23:14:30.640524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:51.609 [2024-12-13 23:14:30.640542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:51.609 [2024-12-13 23:14:30.640553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:51.609 [2024-12-13 23:14:30.640558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:51.609 [2024-12-13 23:14:30.640563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:51.609 [2024-12-13 23:14:30.640569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:51.609 [2024-12-13 23:14:30.640575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:51.609 [2024-12-13 23:14:30.640584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:51.609 [2024-12-13 23:14:30.640594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640599] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:51.609 [2024-12-13 23:14:30.640610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:51.609 [2024-12-13 23:14:30.640626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:51.609 [2024-12-13 23:14:30.640641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:51.609 [2024-12-13 23:14:30.640656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:51.609 [2024-12-13 23:14:30.640671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:51.609 [2024-12-13 23:14:30.640681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:51.609 [2024-12-13 23:14:30.640685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:51.609 [2024-12-13 23:14:30.640690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:51.609 [2024-12-13 23:14:30.640695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:51.609 [2024-12-13 23:14:30.640700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:51.609 [2024-12-13 23:14:30.640705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:51.609 [2024-12-13 23:14:30.640718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:51.609 [2024-12-13 23:14:30.640723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640727] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:51.609 [2024-12-13 23:14:30.640733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:51.609 [2024-12-13 23:14:30.640739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:51.609 [2024-12-13 23:14:30.640752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:51.609 [2024-12-13 23:14:30.640770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:51.609 [2024-12-13 23:14:30.640775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:51.609 
[2024-12-13 23:14:30.640780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:51.609 [2024-12-13 23:14:30.640785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:51.609 [2024-12-13 23:14:30.640790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:51.609 [2024-12-13 23:14:30.640797] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:51.609 [2024-12-13 23:14:30.640808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:51.609 [2024-12-13 23:14:30.640815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:51.609 [2024-12-13 23:14:30.640821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:51.609 [2024-12-13 23:14:30.640826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:51.609 [2024-12-13 23:14:30.640832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:51.609 [2024-12-13 23:14:30.640838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:51.609 [2024-12-13 23:14:30.640844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:51.609 [2024-12-13 23:14:30.640849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:51.609 [2024-12-13 23:14:30.640854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:51.609 [2024-12-13 23:14:30.640860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:51.609 [2024-12-13 23:14:30.640865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:51.609 [2024-12-13 23:14:30.640871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:51.609 [2024-12-13 23:14:30.640876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:51.609 [2024-12-13 23:14:30.640881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:51.609 [2024-12-13 23:14:30.640887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:51.609 [2024-12-13 23:14:30.640892] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:51.609 [2024-12-13 23:14:30.640899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:51.610 [2024-12-13 23:14:30.640905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:51.610 [2024-12-13 23:14:30.640912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:51.610 [2024-12-13 23:14:30.640918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:51.610 [2024-12-13 23:14:30.640924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:51.610 [2024-12-13 23:14:30.640929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.640935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:51.610 [2024-12-13 23:14:30.640941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:33:51.610 [2024-12-13 23:14:30.640946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.662067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.662092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:51.610 [2024-12-13 23:14:30.662101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.089 ms 00:33:51.610 [2024-12-13 23:14:30.662107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.662173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.662180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:51.610 [2024-12-13 23:14:30.662189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:33:51.610 [2024-12-13 23:14:30.662194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.705418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.705451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:51.610 [2024-12-13 23:14:30.705461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.183 ms 00:33:51.610 [2024-12-13 23:14:30.705467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.705503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.705510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:51.610 [2024-12-13 23:14:30.705517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:51.610 [2024-12-13 23:14:30.705523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.705598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.705607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:51.610 [2024-12-13 23:14:30.705614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:51.610 [2024-12-13 23:14:30.705620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.705717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.705726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:51.610 [2024-12-13 23:14:30.705733] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:33:51.610 [2024-12-13 23:14:30.705739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.717680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.717707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:51.610 [2024-12-13 23:14:30.717716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.928 ms 00:33:51.610 [2024-12-13 23:14:30.717722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.717827] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:51.610 [2024-12-13 23:14:30.717838] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:51.610 [2024-12-13 23:14:30.717846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.717854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:51.610 [2024-12-13 23:14:30.717860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:51.610 [2024-12-13 23:14:30.717867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.726995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.727018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:51.610 [2024-12-13 23:14:30.727027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.115 ms 00:33:51.610 [2024-12-13 23:14:30.727033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.727127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.727133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:51.610 [2024-12-13 23:14:30.727141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:33:51.610 [2024-12-13 23:14:30.727150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.727190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.727200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:51.610 [2024-12-13 23:14:30.727212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:51.610 [2024-12-13 23:14:30.727218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.727672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.727683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:51.610 [2024-12-13 23:14:30.727690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:33:51.610 [2024-12-13 23:14:30.727696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.727712] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:51.610 [2024-12-13 23:14:30.727720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.727727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:51.610 [2024-12-13 23:14:30.727733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:51.610 [2024-12-13 23:14:30.727739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.737263] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:51.610 [2024-12-13 23:14:30.737371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.737380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:51.610 [2024-12-13 23:14:30.737387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.618 ms 00:33:51.610 [2024-12-13 23:14:30.737393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.739003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.739025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:51.610 [2024-12-13 23:14:30.739032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:33:51.610 [2024-12-13 23:14:30.739038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.739092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.739100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:51.610 [2024-12-13 23:14:30.739107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:51.610 [2024-12-13 23:14:30.739113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.739139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.739149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:51.610 [2024-12-13 23:14:30.739156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:51.610 [2024-12-13 23:14:30.739162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.610 [2024-12-13 23:14:30.739187] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:51.610 [2024-12-13 23:14:30.739196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.610 [2024-12-13 23:14:30.739202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:51.610 [2024-12-13 23:14:30.739209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:51.610 [2024-12-13 23:14:30.739215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.872 [2024-12-13 23:14:30.759089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.872 [2024-12-13 23:14:30.759116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:51.872 [2024-12-13 23:14:30.759125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.857 ms 00:33:51.872 [2024-12-13 23:14:30.759132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.872 [2024-12-13 23:14:30.759188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:51.872 [2024-12-13 23:14:30.759196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:51.872 [2024-12-13 23:14:30.759203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:33:51.872 [2024-12-13 23:14:30.759209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:51.872 [2024-12-13 23:14:30.760310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 124.897 ms, result 0 00:33:52.816  [2024-12-13T23:14:33.342Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-13T23:14:33.915Z] Copying: 23/1024 [MB] (11 MBps) [2024-12-13T23:14:35.305Z] Copying: 34/1024 [MB] (11 MBps) [2024-12-13T23:14:36.248Z] Copying: 46/1024 [MB] (11 MBps) [2024-12-13T23:14:37.191Z] Copying: 58/1024 [MB] (11 MBps) [2024-12-13T23:14:38.136Z] Copying: 70/1024 [MB] (11 MBps) [2024-12-13T23:14:39.078Z] Copying: 81/1024 [MB] (11 MBps) [2024-12-13T23:14:40.022Z] Copying: 93/1024 [MB] (11 MBps) [2024-12-13T23:14:40.966Z] Copying: 105/1024 [MB] (11 MBps) [2024-12-13T23:14:41.911Z] Copying: 116/1024 [MB] (11 MBps) [2024-12-13T23:14:42.914Z] Copying: 128/1024 [MB] (11 MBps) [2024-12-13T23:14:44.303Z] Copying: 140/1024 [MB] (11 MBps) [2024-12-13T23:14:45.251Z] Copying: 151/1024 [MB] (11 MBps) [2024-12-13T23:14:46.193Z] Copying: 163/1024 [MB] (11 MBps) [2024-12-13T23:14:47.138Z] Copying: 173/1024 [MB] (10 MBps) [2024-12-13T23:14:48.082Z] Copying: 185/1024 [MB] (11 MBps) [2024-12-13T23:14:49.025Z] Copying: 196/1024 [MB] (11 MBps) [2024-12-13T23:14:49.969Z] Copying: 207/1024 [MB] (10 MBps) [2024-12-13T23:14:50.914Z] Copying: 219/1024 [MB] (11 MBps) [2024-12-13T23:14:52.301Z] Copying: 230/1024 [MB] (11 MBps) [2024-12-13T23:14:53.246Z] Copying: 241/1024 [MB] (10 MBps) [2024-12-13T23:14:54.191Z] Copying: 252/1024 [MB] (11 MBps) [2024-12-13T23:14:55.134Z] Copying: 264/1024 [MB] (11 MBps) [2024-12-13T23:14:56.076Z] Copying: 275/1024 [MB] (11 MBps) [2024-12-13T23:14:57.021Z] Copying: 287/1024 [MB] (11 MBps) [2024-12-13T23:14:57.965Z] Copying: 299/1024 [MB] (11 MBps) [2024-12-13T23:14:58.909Z] Copying: 309/1024 [MB] (10 MBps) [2024-12-13T23:15:00.297Z] Copying: 321/1024 [MB] (11 MBps) [2024-12-13T23:15:01.241Z] Copying: 332/1024 [MB] (11 MBps) [2024-12-13T23:15:02.183Z] Copying: 344/1024 [MB] (11 MBps) [2024-12-13T23:15:03.128Z] Copying: 355/1024 [MB] (11 MBps) [2024-12-13T23:15:04.072Z] Copying: 366/1024 [MB] (10 MBps) [2024-12-13T23:15:05.015Z] Copying: 376/1024 [MB] (10 MBps) [2024-12-13T23:15:05.999Z] Copying: 387/1024 [MB] (10 MBps) [2024-12-13T23:15:06.951Z] Copying: 399/1024 [MB] (11 MBps) [2024-12-13T23:15:08.337Z] Copying: 410/1024 [MB] (11 MBps) [2024-12-13T23:15:08.909Z] Copying: 421/1024 [MB] (11 MBps) [2024-12-13T23:15:10.293Z] Copying: 433/1024 [MB] (11 MBps) [2024-12-13T23:15:11.235Z] Copying: 444/1024 [MB] (11 MBps) [2024-12-13T23:15:12.178Z] Copying: 455/1024 [MB] (10 MBps) [2024-12-13T23:15:13.121Z] Copying: 466/1024 [MB] (10 MBps) [2024-12-13T23:15:14.066Z] Copying: 477/1024 [MB] (11 MBps) [2024-12-13T23:15:15.012Z] Copying: 489/1024 [MB] (11 MBps) [2024-12-13T23:15:15.960Z] Copying: 500/1024 [MB] (11 MBps) [2024-12-13T23:15:17.347Z] Copying: 512/1024 [MB] (11 MBps) [2024-12-13T23:15:17.919Z] Copying: 526/1024 [MB] (14 MBps) [2024-12-13T23:15:19.308Z] Copying: 538/1024 [MB] (11 MBps) [2024-12-13T23:15:20.254Z] Copying: 549/1024 [MB] (11 MBps) [2024-12-13T23:15:21.198Z] Copying: 560/1024 [MB] (10 MBps) [2024-12-13T23:15:22.142Z] Copying: 571/1024 [MB] (11 MBps) [2024-12-13T23:15:23.087Z] Copying: 583/1024 [MB] (11 MBps) [2024-12-13T23:15:24.033Z] Copying: 594/1024 [MB] (11 MBps) [2024-12-13T23:15:24.979Z] Copying: 606/1024 [MB] (11 MBps) [2024-12-13T23:15:25.924Z] Copying: 616/1024 [MB] 
(10 MBps) [2024-12-13T23:15:27.311Z] Copying: 628/1024 [MB] (11 MBps) [2024-12-13T23:15:28.253Z] Copying: 639/1024 [MB] (10 MBps) [2024-12-13T23:15:29.230Z] Copying: 650/1024 [MB] (11 MBps) [2024-12-13T23:15:30.178Z] Copying: 661/1024 [MB] (11 MBps) [2024-12-13T23:15:31.123Z] Copying: 672/1024 [MB] (11 MBps) [2024-12-13T23:15:32.068Z] Copying: 684/1024 [MB] (11 MBps) [2024-12-13T23:15:33.008Z] Copying: 695/1024 [MB] (10 MBps) [2024-12-13T23:15:33.952Z] Copying: 706/1024 [MB] (11 MBps) [2024-12-13T23:15:35.341Z] Copying: 720/1024 [MB] (13 MBps) [2024-12-13T23:15:35.913Z] Copying: 737/1024 [MB] (16 MBps) [2024-12-13T23:15:37.296Z] Copying: 749/1024 [MB] (11 MBps) [2024-12-13T23:15:38.237Z] Copying: 767/1024 [MB] (18 MBps) [2024-12-13T23:15:39.182Z] Copying: 789/1024 [MB] (21 MBps) [2024-12-13T23:15:40.126Z] Copying: 811/1024 [MB] (21 MBps) [2024-12-13T23:15:41.067Z] Copying: 823/1024 [MB] (11 MBps) [2024-12-13T23:15:42.009Z] Copying: 842/1024 [MB] (19 MBps) [2024-12-13T23:15:42.953Z] Copying: 859/1024 [MB] (17 MBps) [2024-12-13T23:15:44.336Z] Copying: 882/1024 [MB] (22 MBps) [2024-12-13T23:15:44.910Z] Copying: 907/1024 [MB] (25 MBps) [2024-12-13T23:15:46.294Z] Copying: 925/1024 [MB] (17 MBps) [2024-12-13T23:15:47.238Z] Copying: 940/1024 [MB] (15 MBps) [2024-12-13T23:15:48.183Z] Copying: 957/1024 [MB] (16 MBps) [2024-12-13T23:15:49.129Z] Copying: 972/1024 [MB] (14 MBps) [2024-12-13T23:15:50.074Z] Copying: 991/1024 [MB] (19 MBps) [2024-12-13T23:15:50.648Z] Copying: 1013/1024 [MB] (21 MBps) [2024-12-13T23:15:50.648Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-13 23:15:50.607507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.508 [2024-12-13 23:15:50.607598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:11.508 [2024-12-13 23:15:50.607616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:11.508 [2024-12-13 23:15:50.607626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.508 [2024-12-13 23:15:50.607656] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:11.508 [2024-12-13 23:15:50.610788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.508 [2024-12-13 23:15:50.610827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:11.508 [2024-12-13 23:15:50.610840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:35:11.508 [2024-12-13 23:15:50.610851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.508 [2024-12-13 23:15:50.611104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.509 [2024-12-13 23:15:50.611116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:11.509 [2024-12-13 23:15:50.611126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:35:11.509 [2024-12-13 23:15:50.611135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.509 [2024-12-13 23:15:50.611173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.509 [2024-12-13 23:15:50.611184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:11.509 [2024-12-13 23:15:50.611192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:11.509 [2024-12-13 23:15:50.611201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.509 [2024-12-13 
23:15:50.611268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.509 [2024-12-13 23:15:50.611279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:11.509 [2024-12-13 23:15:50.611288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:35:11.509 [2024-12-13 23:15:50.611296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.509 [2024-12-13 23:15:50.611310] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:11.509 [2024-12-13 23:15:50.611324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 
[2024-12-13 23:15:50.611534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:35:11.509 [2024-12-13 23:15:50.611748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.611995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.612002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.612010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.612017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.612025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:11.509 [2024-12-13 23:15:50.612033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:11.510 [2024-12-13 23:15:50.612813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:11.510 [2024-12-13 23:15:50.612826] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2 00:35:11.510 [2024-12-13 23:15:50.612835] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:11.510 [2024-12-13 23:15:50.612843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:35:11.510 [2024-12-13 23:15:50.612851] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:11.510 [2024-12-13 23:15:50.612861] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:11.510 [2024-12-13 23:15:50.612868] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:11.510 [2024-12-13 23:15:50.612877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:11.510 [2024-12-13 23:15:50.612886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:11.510 [2024-12-13 23:15:50.612893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:11.510 [2024-12-13 23:15:50.612899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:11.510 [2024-12-13 23:15:50.612907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.510 [2024-12-13 23:15:50.612916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:11.510 [2024-12-13 23:15:50.612925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:35:11.510 [2024-12-13 23:15:50.612936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.510 [2024-12-13 23:15:50.627902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.510 [2024-12-13 23:15:50.627957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:11.510 [2024-12-13 23:15:50.627972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.943 ms 00:35:11.510 [2024-12-13 23:15:50.627981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.510 [2024-12-13 23:15:50.628382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.510 [2024-12-13 23:15:50.628406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:11.510 [2024-12-13 23:15:50.628425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:35:11.510 [2024-12-13 23:15:50.628433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.665504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.665564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:11.772 [2024-12-13 23:15:50.665576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.665584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.665660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.665669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:11.772 [2024-12-13 23:15:50.665686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.665694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.665779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.665792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:11.772 [2024-12-13 23:15:50.665801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.665809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.665826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.665834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:11.772 [2024-12-13 23:15:50.665842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.665853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.752262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.752324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:11.772 [2024-12-13 23:15:50.752338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.752346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.823160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.823226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:11.772 [2024-12-13 23:15:50.823241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.823256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.823340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.823351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:11.772 [2024-12-13 23:15:50.823360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.823369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.823414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.823423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:11.772 [2024-12-13 23:15:50.823583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.823594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.823692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.823702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:35:11.772 [2024-12-13 23:15:50.823712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.823719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.823748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.823784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:11.772 [2024-12-13 23:15:50.823794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.823803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.823847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.823859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:11.772 [2024-12-13 23:15:50.823867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.823875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.823921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:11.772 [2024-12-13 23:15:50.823932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:11.772 [2024-12-13 23:15:50.823941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:11.772 [2024-12-13 23:15:50.823950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.772 [2024-12-13 23:15:50.824090] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 216.560 ms, result 0 00:35:12.755 00:35:12.756 00:35:12.756 23:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:14.700 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:14.700 23:15:53 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:35:14.700 [2024-12-13 23:15:53.736448] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
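The two restore.sh steps quoted above can be reproduced by hand against the same workspace. A minimal sketch, assuming the paths from this run and the ftl0 bdev described by ftl.json (the 131072-block --seek offset is copied from the command line above):

set -e
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Verify the data previously read back from ftl0 against the recorded checksum.
md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"

# Write the test file back into the ftl0 bdev (--seek counts blocks), reusing
# the bdev configuration captured in ftl.json for this run.
"$SPDK_DIR/build/bin/spdk_dd" \
  --if="$SPDK_DIR/test/ftl/testfile" \
  --ob=ftl0 \
  --json="$SPDK_DIR/test/ftl/config/ftl.json" \
  --seek=131072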
00:35:14.700 [2024-12-13 23:15:53.736537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88352 ] 00:35:14.962 [2024-12-13 23:15:53.890100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.962 [2024-12-13 23:15:54.003147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.223 [2024-12-13 23:15:54.298098] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:15.223 [2024-12-13 23:15:54.298183] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:15.486 [2024-12-13 23:15:54.467969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.486 [2024-12-13 23:15:54.468015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:15.486 [2024-12-13 23:15:54.468028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:15.486 [2024-12-13 23:15:54.468037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.486 [2024-12-13 23:15:54.468088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.486 [2024-12-13 23:15:54.468101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:15.486 [2024-12-13 23:15:54.468111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:35:15.486 [2024-12-13 23:15:54.468118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.486 [2024-12-13 23:15:54.468136] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:15.486 [2024-12-13 23:15:54.468807] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:15.486 [2024-12-13 23:15:54.468833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.486 [2024-12-13 23:15:54.468841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:15.486 [2024-12-13 23:15:54.468850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:35:15.486 [2024-12-13 23:15:54.468857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.486 [2024-12-13 23:15:54.469147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:15.486 [2024-12-13 23:15:54.469171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.486 [2024-12-13 23:15:54.469183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:15.486 [2024-12-13 23:15:54.469192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:35:15.487 [2024-12-13 23:15:54.469200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.487 [2024-12-13 23:15:54.469246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.487 [2024-12-13 23:15:54.469256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:15.487 [2024-12-13 23:15:54.469264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:35:15.487 [2024-12-13 23:15:54.469271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.487 [2024-12-13 23:15:54.469536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:15.487 [2024-12-13 23:15:54.469547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:15.487 [2024-12-13 23:15:54.469555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:35:15.487 [2024-12-13 23:15:54.469564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.487 [2024-12-13 23:15:54.469628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.487 [2024-12-13 23:15:54.469638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:15.487 [2024-12-13 23:15:54.469646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:35:15.487 [2024-12-13 23:15:54.469654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.487 [2024-12-13 23:15:54.469676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.487 [2024-12-13 23:15:54.469684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:15.487 [2024-12-13 23:15:54.469696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:15.487 [2024-12-13 23:15:54.469703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.487 [2024-12-13 23:15:54.469721] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:15.487 [2024-12-13 23:15:54.473753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.487 [2024-12-13 23:15:54.473790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:15.487 [2024-12-13 23:15:54.473800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.036 ms 00:35:15.487 [2024-12-13 23:15:54.473807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.487 [2024-12-13 23:15:54.473847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.487 [2024-12-13 23:15:54.473855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:15.487 [2024-12-13 23:15:54.473864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:15.487 [2024-12-13 23:15:54.473871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.487 [2024-12-13 23:15:54.473912] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:15.487 [2024-12-13 23:15:54.473935] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:15.487 [2024-12-13 23:15:54.473973] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:15.487 [2024-12-13 23:15:54.473989] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:15.487 [2024-12-13 23:15:54.474094] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:15.487 [2024-12-13 23:15:54.474105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:15.487 [2024-12-13 23:15:54.474115] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:15.487 [2024-12-13 23:15:54.474125] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474136] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474146] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:15.487 [2024-12-13 23:15:54.474154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:15.487 [2024-12-13 23:15:54.474162] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:15.487 [2024-12-13 23:15:54.474169] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:15.487 [2024-12-13 23:15:54.474178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.487 [2024-12-13 23:15:54.474185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:15.487 [2024-12-13 23:15:54.474194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:35:15.487 [2024-12-13 23:15:54.474201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.487 [2024-12-13 23:15:54.474283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.487 [2024-12-13 23:15:54.474300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:15.487 [2024-12-13 23:15:54.474308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:15.487 [2024-12-13 23:15:54.474318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.487 [2024-12-13 23:15:54.474421] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:15.487 [2024-12-13 23:15:54.474432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:15.487 [2024-12-13 23:15:54.474440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:15.487 [2024-12-13 23:15:54.474464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:15.487 [2024-12-13 23:15:54.474488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:15.487 [2024-12-13 23:15:54.474502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:15.487 [2024-12-13 23:15:54.474509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:15.487 [2024-12-13 23:15:54.474516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:15.487 [2024-12-13 23:15:54.474523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:15.487 [2024-12-13 23:15:54.474530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:15.487 [2024-12-13 23:15:54.474543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:15.487 [2024-12-13 23:15:54.474558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474565] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:15.487 [2024-12-13 23:15:54.474579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:15.487 [2024-12-13 23:15:54.474599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:15.487 [2024-12-13 23:15:54.474619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:15.487 [2024-12-13 23:15:54.474638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:15.487 [2024-12-13 23:15:54.474658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:15.487 [2024-12-13 23:15:54.474672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:15.487 [2024-12-13 23:15:54.474678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:15.487 [2024-12-13 23:15:54.474685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:15.487 [2024-12-13 23:15:54.474692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:15.487 [2024-12-13 23:15:54.474701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:15.487 [2024-12-13 23:15:54.474708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:15.487 [2024-12-13 23:15:54.474721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:15.487 [2024-12-13 23:15:54.474728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474734] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:15.487 [2024-12-13 23:15:54.474743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:15.487 [2024-12-13 23:15:54.474750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:15.487 [2024-12-13 23:15:54.474781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:15.487 [2024-12-13 23:15:54.474788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:15.487 [2024-12-13 23:15:54.474795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:15.487 
[2024-12-13 23:15:54.474802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:15.487 [2024-12-13 23:15:54.474809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:15.487 [2024-12-13 23:15:54.474816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:15.487 [2024-12-13 23:15:54.474824] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:15.487 [2024-12-13 23:15:54.474833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:15.487 [2024-12-13 23:15:54.474842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:15.487 [2024-12-13 23:15:54.474850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:15.487 [2024-12-13 23:15:54.474858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:15.488 [2024-12-13 23:15:54.474865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:15.488 [2024-12-13 23:15:54.474872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:15.488 [2024-12-13 23:15:54.474879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:15.488 [2024-12-13 23:15:54.474886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:15.488 [2024-12-13 23:15:54.474894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:15.488 [2024-12-13 23:15:54.474901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:15.488 [2024-12-13 23:15:54.474909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:15.488 [2024-12-13 23:15:54.474917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:15.488 [2024-12-13 23:15:54.474924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:15.488 [2024-12-13 23:15:54.474931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:15.488 [2024-12-13 23:15:54.474938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:15.488 [2024-12-13 23:15:54.474945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:15.488 [2024-12-13 23:15:54.474955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:15.488 [2024-12-13 23:15:54.474964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:15.488 [2024-12-13 23:15:54.474972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:15.488 [2024-12-13 23:15:54.474980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:15.488 [2024-12-13 23:15:54.474988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:15.488 [2024-12-13 23:15:54.474996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.475004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:15.488 [2024-12-13 23:15:54.475011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:35:15.488 [2024-12-13 23:15:54.475018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.501165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.501195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:15.488 [2024-12-13 23:15:54.501206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.107 ms 00:35:15.488 [2024-12-13 23:15:54.501214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.501296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.501304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:15.488 [2024-12-13 23:15:54.501315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:35:15.488 [2024-12-13 23:15:54.501323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.545326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.545364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:15.488 [2024-12-13 23:15:54.545375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.956 ms 00:35:15.488 [2024-12-13 23:15:54.545384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.545429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.545439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:15.488 [2024-12-13 23:15:54.545448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:15.488 [2024-12-13 23:15:54.545456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.545555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.545566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:15.488 [2024-12-13 23:15:54.545575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:35:15.488 [2024-12-13 23:15:54.545583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.545703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.545715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:15.488 [2024-12-13 23:15:54.545724] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:35:15.488 [2024-12-13 23:15:54.545732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.560587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.560617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:15.488 [2024-12-13 23:15:54.560627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.837 ms 00:35:15.488 [2024-12-13 23:15:54.560634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.560787] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:15.488 [2024-12-13 23:15:54.560802] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:15.488 [2024-12-13 23:15:54.560812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.560823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:15.488 [2024-12-13 23:15:54.560832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:35:15.488 [2024-12-13 23:15:54.560840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.573106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.573133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:15.488 [2024-12-13 23:15:54.573144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.251 ms 00:35:15.488 [2024-12-13 23:15:54.573152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.573272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.573281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:15.488 [2024-12-13 23:15:54.573290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:35:15.488 [2024-12-13 23:15:54.573301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.573346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.573356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:15.488 [2024-12-13 23:15:54.573371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:35:15.488 [2024-12-13 23:15:54.573378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.573971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.573990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:15.488 [2024-12-13 23:15:54.573999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:35:15.488 [2024-12-13 23:15:54.574006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.574027] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:15.488 [2024-12-13 23:15:54.574038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.574046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:35:15.488 [2024-12-13 23:15:54.574054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:15.488 [2024-12-13 23:15:54.574061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.586273] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:15.488 [2024-12-13 23:15:54.586404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.586414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:15.488 [2024-12-13 23:15:54.586424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.326 ms 00:35:15.488 [2024-12-13 23:15:54.586432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.588617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.588654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:15.488 [2024-12-13 23:15:54.588663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.166 ms 00:35:15.488 [2024-12-13 23:15:54.588671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.588752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.588774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:15.488 [2024-12-13 23:15:54.588784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:35:15.488 [2024-12-13 23:15:54.588792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.588814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.588827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:15.488 [2024-12-13 23:15:54.588836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:15.488 [2024-12-13 23:15:54.588844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.588874] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:15.488 [2024-12-13 23:15:54.588884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.588892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:15.488 [2024-12-13 23:15:54.588900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:15.488 [2024-12-13 23:15:54.588907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.613931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.613962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:15.488 [2024-12-13 23:15:54.613973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.003 ms 00:35:15.488 [2024-12-13 23:15:54.613981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.488 [2024-12-13 23:15:54.614050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:15.488 [2024-12-13 23:15:54.614060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:15.488 [2024-12-13 23:15:54.614069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:35:15.488 [2024-12-13 23:15:54.614077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.489 [2024-12-13 23:15:54.615075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 146.668 ms, result 0 00:35:16.879  [2024-12-13T23:15:56.964Z] Copying: 17/1024 [MB] (17 MBps) [2024-12-13T23:15:57.909Z] Copying: 33/1024 [MB] (16 MBps) [2024-12-13T23:15:58.855Z] Copying: 46/1024 [MB] (12 MBps) [2024-12-13T23:15:59.794Z] Copying: 66/1024 [MB] (20 MBps) [2024-12-13T23:16:00.737Z] Copying: 89/1024 [MB] (22 MBps) [2024-12-13T23:16:01.678Z] Copying: 114/1024 [MB] (24 MBps) [2024-12-13T23:16:03.067Z] Copying: 134/1024 [MB] (20 MBps) [2024-12-13T23:16:03.640Z] Copying: 151/1024 [MB] (16 MBps) [2024-12-13T23:16:05.030Z] Copying: 168/1024 [MB] (17 MBps) [2024-12-13T23:16:05.991Z] Copying: 191/1024 [MB] (22 MBps) [2024-12-13T23:16:06.936Z] Copying: 214/1024 [MB] (22 MBps) [2024-12-13T23:16:07.882Z] Copying: 236/1024 [MB] (22 MBps) [2024-12-13T23:16:08.826Z] Copying: 255/1024 [MB] (19 MBps) [2024-12-13T23:16:09.772Z] Copying: 273/1024 [MB] (18 MBps) [2024-12-13T23:16:10.715Z] Copying: 295/1024 [MB] (21 MBps) [2024-12-13T23:16:11.660Z] Copying: 313/1024 [MB] (18 MBps) [2024-12-13T23:16:13.050Z] Copying: 330/1024 [MB] (16 MBps) [2024-12-13T23:16:13.993Z] Copying: 344/1024 [MB] (14 MBps) [2024-12-13T23:16:14.936Z] Copying: 354/1024 [MB] (10 MBps) [2024-12-13T23:16:15.942Z] Copying: 364/1024 [MB] (10 MBps) [2024-12-13T23:16:16.881Z] Copying: 383716/1048576 [kB] (10104 kBps) [2024-12-13T23:16:17.824Z] Copying: 400/1024 [MB] (25 MBps) [2024-12-13T23:16:18.768Z] Copying: 430/1024 [MB] (30 MBps) [2024-12-13T23:16:19.710Z] Copying: 441/1024 [MB] (10 MBps) [2024-12-13T23:16:20.652Z] Copying: 451/1024 [MB] (10 MBps) [2024-12-13T23:16:22.040Z] Copying: 461/1024 [MB] (10 MBps) [2024-12-13T23:16:22.986Z] Copying: 482488/1048576 [kB] (9960 kBps) [2024-12-13T23:16:23.931Z] Copying: 481/1024 [MB] (10 MBps) [2024-12-13T23:16:24.875Z] Copying: 492/1024 [MB] (11 MBps) [2024-12-13T23:16:25.819Z] Copying: 503/1024 [MB] (11 MBps) [2024-12-13T23:16:26.764Z] Copying: 515/1024 [MB] (11 MBps) [2024-12-13T23:16:27.708Z] Copying: 526/1024 [MB] (11 MBps) [2024-12-13T23:16:28.651Z] Copying: 541/1024 [MB] (14 MBps) [2024-12-13T23:16:30.040Z] Copying: 562/1024 [MB] (20 MBps) [2024-12-13T23:16:30.981Z] Copying: 588/1024 [MB] (25 MBps) [2024-12-13T23:16:31.924Z] Copying: 602/1024 [MB] (14 MBps) [2024-12-13T23:16:32.878Z] Copying: 621/1024 [MB] (18 MBps) [2024-12-13T23:16:33.823Z] Copying: 632/1024 [MB] (11 MBps) [2024-12-13T23:16:34.767Z] Copying: 643/1024 [MB] (11 MBps) [2024-12-13T23:16:35.709Z] Copying: 654/1024 [MB] (11 MBps) [2024-12-13T23:16:36.654Z] Copying: 665/1024 [MB] (11 MBps) [2024-12-13T23:16:38.043Z] Copying: 676/1024 [MB] (11 MBps) [2024-12-13T23:16:38.652Z] Copying: 687/1024 [MB] (10 MBps) [2024-12-13T23:16:40.065Z] Copying: 699/1024 [MB] (11 MBps) [2024-12-13T23:16:40.638Z] Copying: 709/1024 [MB] (10 MBps) [2024-12-13T23:16:42.028Z] Copying: 720/1024 [MB] (11 MBps) [2024-12-13T23:16:42.971Z] Copying: 731/1024 [MB] (10 MBps) [2024-12-13T23:16:43.916Z] Copying: 742/1024 [MB] (11 MBps) [2024-12-13T23:16:44.861Z] Copying: 754/1024 [MB] (11 MBps) [2024-12-13T23:16:45.806Z] Copying: 765/1024 [MB] (11 MBps) [2024-12-13T23:16:46.752Z] Copying: 776/1024 [MB] (11 MBps) [2024-12-13T23:16:47.698Z] Copying: 787/1024 [MB] (11 MBps) [2024-12-13T23:16:48.642Z] Copying: 797/1024 [MB] (10 MBps) [2024-12-13T23:16:50.026Z] 
Copying: 809/1024 [MB] (11 MBps) [2024-12-13T23:16:50.969Z] Copying: 820/1024 [MB] (11 MBps) [2024-12-13T23:16:51.913Z] Copying: 831/1024 [MB] (11 MBps) [2024-12-13T23:16:52.859Z] Copying: 842/1024 [MB] (11 MBps) [2024-12-13T23:16:53.804Z] Copying: 853/1024 [MB] (10 MBps) [2024-12-13T23:16:54.749Z] Copying: 863/1024 [MB] (10 MBps) [2024-12-13T23:16:55.692Z] Copying: 873/1024 [MB] (10 MBps) [2024-12-13T23:16:56.637Z] Copying: 883/1024 [MB] (10 MBps) [2024-12-13T23:16:58.018Z] Copying: 897/1024 [MB] (13 MBps) [2024-12-13T23:16:58.963Z] Copying: 930/1024 [MB] (33 MBps) [2024-12-13T23:16:59.909Z] Copying: 947/1024 [MB] (16 MBps) [2024-12-13T23:17:00.855Z] Copying: 959/1024 [MB] (12 MBps) [2024-12-13T23:17:01.837Z] Copying: 976/1024 [MB] (16 MBps) [2024-12-13T23:17:02.806Z] Copying: 995/1024 [MB] (18 MBps) [2024-12-13T23:17:03.741Z] Copying: 1016/1024 [MB] (21 MBps) [2024-12-13T23:17:04.003Z] Copying: 1048372/1048576 [kB] (7508 kBps) [2024-12-13T23:17:04.003Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-13 23:17:03.812084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:24.863 [2024-12-13 23:17:03.812136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:24.863 [2024-12-13 23:17:03.812148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:24.863 [2024-12-13 23:17:03.812155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.863 [2024-12-13 23:17:03.813170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:24.863 [2024-12-13 23:17:03.816963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:24.863 [2024-12-13 23:17:03.816993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:24.863 [2024-12-13 23:17:03.817002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:36:24.863 [2024-12-13 23:17:03.817009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.863 [2024-12-13 23:17:03.824295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:24.863 [2024-12-13 23:17:03.824326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:24.863 [2024-12-13 23:17:03.824333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.460 ms 00:36:24.863 [2024-12-13 23:17:03.824339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.863 [2024-12-13 23:17:03.824360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:24.863 [2024-12-13 23:17:03.824367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:24.863 [2024-12-13 23:17:03.824374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:24.863 [2024-12-13 23:17:03.824380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.863 [2024-12-13 23:17:03.824422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:24.863 [2024-12-13 23:17:03.824430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:24.863 [2024-12-13 23:17:03.824437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:36:24.863 [2024-12-13 23:17:03.824443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.863 [2024-12-13 23:17:03.824453] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:24.863 
[2024-12-13 23:17:03.824463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:36:24.863 [2024-12-13 23:17:03.824471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: 
free 00:36:24.863 [2024-12-13 23:17:03.824612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:24.863 [2024-12-13 23:17:03.824664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 
261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.824999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825056] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:24.864 [2024-12-13 23:17:03.825068] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:24.864 [2024-12-13 23:17:03.825074] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2 00:36:24.864 [2024-12-13 23:17:03.825080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:36:24.864 [2024-12-13 23:17:03.825086] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:36:24.864 [2024-12-13 23:17:03.825092] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:36:24.864 [2024-12-13 23:17:03.825097] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:36:24.864 [2024-12-13 23:17:03.825107] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:24.864 [2024-12-13 23:17:03.825112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:24.864 [2024-12-13 23:17:03.825118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:24.864 [2024-12-13 23:17:03.825122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:24.864 [2024-12-13 23:17:03.825128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:24.864 [2024-12-13 23:17:03.825133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:24.864 [2024-12-13 23:17:03.825139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:24.864 [2024-12-13 23:17:03.825145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:36:24.864 [2024-12-13 23:17:03.825151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.864 [2024-12-13 23:17:03.834906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:24.864 [2024-12-13 23:17:03.834933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:24.864 [2024-12-13 23:17:03.834945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.744 ms 00:36:24.864 [2024-12-13 23:17:03.834951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.864 [2024-12-13 23:17:03.835221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:24.864 [2024-12-13 23:17:03.835237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:24.864 [2024-12-13 23:17:03.835244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:36:24.864 [2024-12-13 23:17:03.835250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.864 [2024-12-13 23:17:03.861192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.864 [2024-12-13 23:17:03.861223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:24.864 [2024-12-13 23:17:03.861231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.864 [2024-12-13 23:17:03.861237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.864 [2024-12-13 23:17:03.861277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.864 [2024-12-13 23:17:03.861284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:24.864 [2024-12-13 23:17:03.861289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.864 [2024-12-13 
23:17:03.861295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.864 [2024-12-13 23:17:03.861331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.864 [2024-12-13 23:17:03.861338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:24.865 [2024-12-13 23:17:03.861347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.861353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.861365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.865 [2024-12-13 23:17:03.861371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:24.865 [2024-12-13 23:17:03.861376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.861381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.919407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.865 [2024-12-13 23:17:03.919448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:24.865 [2024-12-13 23:17:03.919457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.919463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.968025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.865 [2024-12-13 23:17:03.968063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:24.865 [2024-12-13 23:17:03.968072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.968078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.968130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.865 [2024-12-13 23:17:03.968138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:24.865 [2024-12-13 23:17:03.968144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.968153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.968178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.865 [2024-12-13 23:17:03.968185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:24.865 [2024-12-13 23:17:03.968190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.968195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.968253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.865 [2024-12-13 23:17:03.968261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:24.865 [2024-12-13 23:17:03.968267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.968272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.968293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.865 [2024-12-13 23:17:03.968299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:24.865 [2024-12-13 23:17:03.968305] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.968311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.968338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.865 [2024-12-13 23:17:03.968344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:24.865 [2024-12-13 23:17:03.968350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.968356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.968387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:24.865 [2024-12-13 23:17:03.968394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:24.865 [2024-12-13 23:17:03.968399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:24.865 [2024-12-13 23:17:03.968405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:24.865 [2024-12-13 23:17:03.968494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 158.624 ms, result 0 00:36:26.242 00:36:26.242 00:36:26.242 23:17:05 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:36:26.243 [2024-12-13 23:17:05.189276] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:36:26.243 [2024-12-13 23:17:05.189400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89055 ] 00:36:26.243 [2024-12-13 23:17:05.345926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.499 [2024-12-13 23:17:05.432675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.757 [2024-12-13 23:17:05.644033] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:26.757 [2024-12-13 23:17:05.644084] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:26.757 [2024-12-13 23:17:05.795146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.795185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:26.757 [2024-12-13 23:17:05.795195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:26.757 [2024-12-13 23:17:05.795201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.795234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.795244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:26.757 [2024-12-13 23:17:05.795250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:36:26.757 [2024-12-13 23:17:05.795256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.795268] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:26.757 [2024-12-13 23:17:05.795839] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:26.757 [2024-12-13 23:17:05.795857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.795863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:26.757 [2024-12-13 23:17:05.795870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:36:26.757 [2024-12-13 23:17:05.795876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.796084] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:36:26.757 [2024-12-13 23:17:05.796107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.796118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:26.757 [2024-12-13 23:17:05.796125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:36:26.757 [2024-12-13 23:17:05.796131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.796163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.796170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:26.757 [2024-12-13 23:17:05.796176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:36:26.757 [2024-12-13 23:17:05.796181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.796375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.796389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:26.757 [2024-12-13 23:17:05.796396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:36:26.757 [2024-12-13 23:17:05.796402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.796450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.796456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:26.757 [2024-12-13 23:17:05.796462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:36:26.757 [2024-12-13 23:17:05.796468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.796483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.796489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:26.757 [2024-12-13 23:17:05.796496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:26.757 [2024-12-13 23:17:05.796502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.796514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:26.757 [2024-12-13 23:17:05.799350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.799377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:26.757 [2024-12-13 23:17:05.799384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.838 ms 00:36:26.757 [2024-12-13 23:17:05.799390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.799415] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.757 [2024-12-13 23:17:05.799422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:26.757 [2024-12-13 23:17:05.799442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:36:26.757 [2024-12-13 23:17:05.799448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.757 [2024-12-13 23:17:05.799480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:26.757 [2024-12-13 23:17:05.799495] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:26.757 [2024-12-13 23:17:05.799523] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:26.757 [2024-12-13 23:17:05.799534] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:26.758 [2024-12-13 23:17:05.799611] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:26.758 [2024-12-13 23:17:05.799619] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:26.758 [2024-12-13 23:17:05.799628] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:26.758 [2024-12-13 23:17:05.799635] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:26.758 [2024-12-13 23:17:05.799642] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:26.758 [2024-12-13 23:17:05.799649] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:26.758 [2024-12-13 23:17:05.799655] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:26.758 [2024-12-13 23:17:05.799660] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:26.758 [2024-12-13 23:17:05.799665] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:26.758 [2024-12-13 23:17:05.799671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.799676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:26.758 [2024-12-13 23:17:05.799682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:36:26.758 [2024-12-13 23:17:05.799688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.799751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.799767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:26.758 [2024-12-13 23:17:05.799773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:36:26.758 [2024-12-13 23:17:05.799781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.799852] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:26.758 [2024-12-13 23:17:05.799859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:26.758 [2024-12-13 23:17:05.799866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:26.758 [2024-12-13 23:17:05.799871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:36:26.758 [2024-12-13 23:17:05.799877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:26.758 [2024-12-13 23:17:05.799882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:26.758 [2024-12-13 23:17:05.799886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:26.758 [2024-12-13 23:17:05.799891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:26.758 [2024-12-13 23:17:05.799896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:26.758 [2024-12-13 23:17:05.799901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:26.758 [2024-12-13 23:17:05.799906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:26.758 [2024-12-13 23:17:05.799912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:26.758 [2024-12-13 23:17:05.799917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:26.758 [2024-12-13 23:17:05.799922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:26.758 [2024-12-13 23:17:05.799927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:26.758 [2024-12-13 23:17:05.799936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:26.758 [2024-12-13 23:17:05.799941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:26.758 [2024-12-13 23:17:05.799945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:26.758 [2024-12-13 23:17:05.799950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:26.758 [2024-12-13 23:17:05.799955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:26.758 [2024-12-13 23:17:05.799960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:26.758 [2024-12-13 23:17:05.799966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:26.758 [2024-12-13 23:17:05.799971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:26.758 [2024-12-13 23:17:05.799975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:26.758 [2024-12-13 23:17:05.799980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:26.758 [2024-12-13 23:17:05.799985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:26.758 [2024-12-13 23:17:05.799989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:26.758 [2024-12-13 23:17:05.799994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:26.758 [2024-12-13 23:17:05.799999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:26.758 [2024-12-13 23:17:05.800004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:26.758 [2024-12-13 23:17:05.800008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:26.758 [2024-12-13 23:17:05.800013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:26.758 [2024-12-13 23:17:05.800018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:26.758 [2024-12-13 23:17:05.800023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:26.758 [2024-12-13 23:17:05.800028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:26.758 [2024-12-13 23:17:05.800032] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:26.758 [2024-12-13 23:17:05.800037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:26.758 [2024-12-13 23:17:05.800042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:26.758 [2024-12-13 23:17:05.800047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:26.758 [2024-12-13 23:17:05.800051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:26.758 [2024-12-13 23:17:05.800056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:26.758 [2024-12-13 23:17:05.800061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:26.758 [2024-12-13 23:17:05.800066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:26.758 [2024-12-13 23:17:05.800072] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:26.758 [2024-12-13 23:17:05.800078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:26.758 [2024-12-13 23:17:05.800083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:26.758 [2024-12-13 23:17:05.800088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:26.758 [2024-12-13 23:17:05.800096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:26.758 [2024-12-13 23:17:05.800101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:26.758 [2024-12-13 23:17:05.800105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:26.758 [2024-12-13 23:17:05.800110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:26.758 [2024-12-13 23:17:05.800115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:26.758 [2024-12-13 23:17:05.800120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:26.758 [2024-12-13 23:17:05.800126] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:26.758 [2024-12-13 23:17:05.800133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:26.758 [2024-12-13 23:17:05.800139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:26.758 [2024-12-13 23:17:05.800144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:26.758 [2024-12-13 23:17:05.800150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:26.758 [2024-12-13 23:17:05.800155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:26.758 [2024-12-13 23:17:05.800160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:26.758 [2024-12-13 23:17:05.800165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:26.758 [2024-12-13 23:17:05.800170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:26.758 [2024-12-13 23:17:05.800176] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:26.758 [2024-12-13 23:17:05.800181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:26.758 [2024-12-13 23:17:05.800186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:26.758 [2024-12-13 23:17:05.800191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:26.758 [2024-12-13 23:17:05.800196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:26.758 [2024-12-13 23:17:05.800201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:26.758 [2024-12-13 23:17:05.800206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:26.758 [2024-12-13 23:17:05.800212] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:26.758 [2024-12-13 23:17:05.800217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:26.758 [2024-12-13 23:17:05.800223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:26.758 [2024-12-13 23:17:05.800229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:26.758 [2024-12-13 23:17:05.800234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:26.758 [2024-12-13 23:17:05.800239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:26.758 [2024-12-13 23:17:05.800245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.800251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:26.758 [2024-12-13 23:17:05.800256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:36:26.758 [2024-12-13 23:17:05.800261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.818852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.818876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:26.758 [2024-12-13 23:17:05.818884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.559 ms 00:36:26.758 [2024-12-13 23:17:05.818890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.818950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.818956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:26.758 [2024-12-13 23:17:05.818965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:36:26.758 [2024-12-13 23:17:05.818970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.866046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.866079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:26.758 [2024-12-13 23:17:05.866088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.040 ms 00:36:26.758 [2024-12-13 23:17:05.866094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.866131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.866139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:26.758 [2024-12-13 23:17:05.866146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:36:26.758 [2024-12-13 23:17:05.866151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.866223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.866232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:26.758 [2024-12-13 23:17:05.866238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:36:26.758 [2024-12-13 23:17:05.866244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.866331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.866339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:26.758 [2024-12-13 23:17:05.866346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:36:26.758 [2024-12-13 23:17:05.866351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.876855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.876882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:26.758 [2024-12-13 23:17:05.876889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.490 ms 00:36:26.758 [2024-12-13 23:17:05.876895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.876983] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:36:26.758 [2024-12-13 23:17:05.876992] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:26.758 [2024-12-13 23:17:05.876999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.877007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:26.758 [2024-12-13 23:17:05.877013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:36:26.758 [2024-12-13 23:17:05.877019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.886143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.886166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:26.758 [2024-12-13 23:17:05.886174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.112 ms 00:36:26.758 [2024-12-13 23:17:05.886181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.886269] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.886276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:26.758 [2024-12-13 23:17:05.886282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:36:26.758 [2024-12-13 23:17:05.886291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.886314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.886321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:26.758 [2024-12-13 23:17:05.886328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:36:26.758 [2024-12-13 23:17:05.886339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.886779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.886797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:26.758 [2024-12-13 23:17:05.886804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:36:26.758 [2024-12-13 23:17:05.886809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.758 [2024-12-13 23:17:05.886824] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:36:26.758 [2024-12-13 23:17:05.886831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.758 [2024-12-13 23:17:05.886838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:36:26.758 [2024-12-13 23:17:05.886844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:26.758 [2024-12-13 23:17:05.886849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.018 [2024-12-13 23:17:05.895663] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:27.018 [2024-12-13 23:17:05.895783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.018 [2024-12-13 23:17:05.895792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:27.018 [2024-12-13 23:17:05.895798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.921 ms 00:36:27.018 [2024-12-13 23:17:05.895804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.018 [2024-12-13 23:17:05.897466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.018 [2024-12-13 23:17:05.897488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:27.018 [2024-12-13 23:17:05.897496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:36:27.018 [2024-12-13 23:17:05.897501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.018 [2024-12-13 23:17:05.897559] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:36:27.018 [2024-12-13 23:17:05.897918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.018 [2024-12-13 23:17:05.897935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:27.018 [2024-12-13 23:17:05.897942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:36:27.018 [2024-12-13 23:17:05.897948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:36:27.018 [2024-12-13 23:17:05.897967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.018 [2024-12-13 23:17:05.897975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:27.018 [2024-12-13 23:17:05.897980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:27.018 [2024-12-13 23:17:05.897986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.018 [2024-12-13 23:17:05.898008] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:27.018 [2024-12-13 23:17:05.898016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.018 [2024-12-13 23:17:05.898022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:27.018 [2024-12-13 23:17:05.898027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:36:27.018 [2024-12-13 23:17:05.898033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.018 [2024-12-13 23:17:05.916067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.018 [2024-12-13 23:17:05.916094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:27.018 [2024-12-13 23:17:05.916102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.022 ms 00:36:27.018 [2024-12-13 23:17:05.916109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.018 [2024-12-13 23:17:05.916161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:27.018 [2024-12-13 23:17:05.916168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:27.018 [2024-12-13 23:17:05.916175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:36:27.018 [2024-12-13 23:17:05.916180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:27.018 [2024-12-13 23:17:05.917162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.694 ms, result 0 00:36:27.962  [2024-12-13T23:17:08.491Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-13T23:17:09.063Z] Copying: 43/1024 [MB] (23 MBps) [2024-12-13T23:17:10.448Z] Copying: 54/1024 [MB] (10 MBps) [2024-12-13T23:17:11.391Z] Copying: 66/1024 [MB] (11 MBps) [2024-12-13T23:17:12.337Z] Copying: 88/1024 [MB] (22 MBps) [2024-12-13T23:17:13.280Z] Copying: 104/1024 [MB] (15 MBps) [2024-12-13T23:17:14.224Z] Copying: 120/1024 [MB] (16 MBps) [2024-12-13T23:17:15.170Z] Copying: 135/1024 [MB] (14 MBps) [2024-12-13T23:17:16.111Z] Copying: 151/1024 [MB] (16 MBps) [2024-12-13T23:17:17.495Z] Copying: 171/1024 [MB] (20 MBps) [2024-12-13T23:17:18.068Z] Copying: 188/1024 [MB] (17 MBps) [2024-12-13T23:17:19.454Z] Copying: 207/1024 [MB] (18 MBps) [2024-12-13T23:17:20.396Z] Copying: 221/1024 [MB] (14 MBps) [2024-12-13T23:17:21.338Z] Copying: 231/1024 [MB] (10 MBps) [2024-12-13T23:17:22.283Z] Copying: 242/1024 [MB] (10 MBps) [2024-12-13T23:17:23.227Z] Copying: 252/1024 [MB] (10 MBps) [2024-12-13T23:17:24.229Z] Copying: 263/1024 [MB] (10 MBps) [2024-12-13T23:17:25.176Z] Copying: 273/1024 [MB] (10 MBps) [2024-12-13T23:17:26.121Z] Copying: 284/1024 [MB] (10 MBps) [2024-12-13T23:17:27.067Z] Copying: 295/1024 [MB] (10 MBps) [2024-12-13T23:17:28.459Z] Copying: 317/1024 [MB] (21 MBps) [2024-12-13T23:17:29.403Z] Copying: 333/1024 [MB] (16 MBps) [2024-12-13T23:17:30.347Z] Copying: 344/1024 [MB] (10 MBps) [2024-12-13T23:17:31.293Z] 
Copying: 354/1024 [MB] (10 MBps) [2024-12-13T23:17:32.238Z] Copying: 365/1024 [MB] (10 MBps) [2024-12-13T23:17:33.183Z] Copying: 376/1024 [MB] (10 MBps) [2024-12-13T23:17:34.128Z] Copying: 387/1024 [MB] (11 MBps) [2024-12-13T23:17:35.073Z] Copying: 408/1024 [MB] (20 MBps) [2024-12-13T23:17:36.460Z] Copying: 424/1024 [MB] (16 MBps) [2024-12-13T23:17:37.403Z] Copying: 439/1024 [MB] (15 MBps) [2024-12-13T23:17:38.346Z] Copying: 453/1024 [MB] (14 MBps) [2024-12-13T23:17:39.289Z] Copying: 475/1024 [MB] (21 MBps) [2024-12-13T23:17:40.236Z] Copying: 495/1024 [MB] (20 MBps) [2024-12-13T23:17:41.181Z] Copying: 515/1024 [MB] (20 MBps) [2024-12-13T23:17:42.126Z] Copying: 538/1024 [MB] (22 MBps) [2024-12-13T23:17:43.070Z] Copying: 556/1024 [MB] (18 MBps) [2024-12-13T23:17:44.459Z] Copying: 576/1024 [MB] (19 MBps) [2024-12-13T23:17:45.404Z] Copying: 587/1024 [MB] (11 MBps) [2024-12-13T23:17:46.349Z] Copying: 600/1024 [MB] (13 MBps) [2024-12-13T23:17:47.332Z] Copying: 611/1024 [MB] (11 MBps) [2024-12-13T23:17:48.281Z] Copying: 633/1024 [MB] (21 MBps) [2024-12-13T23:17:49.224Z] Copying: 650/1024 [MB] (17 MBps) [2024-12-13T23:17:50.170Z] Copying: 666/1024 [MB] (15 MBps) [2024-12-13T23:17:51.113Z] Copying: 676/1024 [MB] (10 MBps) [2024-12-13T23:17:52.501Z] Copying: 686/1024 [MB] (10 MBps) [2024-12-13T23:17:53.075Z] Copying: 697/1024 [MB] (10 MBps) [2024-12-13T23:17:54.461Z] Copying: 708/1024 [MB] (10 MBps) [2024-12-13T23:17:55.409Z] Copying: 719/1024 [MB] (11 MBps) [2024-12-13T23:17:56.353Z] Copying: 731/1024 [MB] (12 MBps) [2024-12-13T23:17:57.298Z] Copying: 742/1024 [MB] (10 MBps) [2024-12-13T23:17:58.242Z] Copying: 754/1024 [MB] (11 MBps) [2024-12-13T23:17:59.187Z] Copying: 765/1024 [MB] (10 MBps) [2024-12-13T23:18:00.133Z] Copying: 777/1024 [MB] (11 MBps) [2024-12-13T23:18:01.076Z] Copying: 788/1024 [MB] (11 MBps) [2024-12-13T23:18:02.464Z] Copying: 800/1024 [MB] (11 MBps) [2024-12-13T23:18:03.410Z] Copying: 812/1024 [MB] (12 MBps) [2024-12-13T23:18:04.355Z] Copying: 822/1024 [MB] (10 MBps) [2024-12-13T23:18:05.299Z] Copying: 833/1024 [MB] (10 MBps) [2024-12-13T23:18:06.244Z] Copying: 843/1024 [MB] (10 MBps) [2024-12-13T23:18:07.189Z] Copying: 854/1024 [MB] (10 MBps) [2024-12-13T23:18:08.135Z] Copying: 867/1024 [MB] (13 MBps) [2024-12-13T23:18:09.079Z] Copying: 879/1024 [MB] (11 MBps) [2024-12-13T23:18:10.522Z] Copying: 891/1024 [MB] (12 MBps) [2024-12-13T23:18:11.117Z] Copying: 901/1024 [MB] (10 MBps) [2024-12-13T23:18:12.063Z] Copying: 922/1024 [MB] (20 MBps) [2024-12-13T23:18:13.457Z] Copying: 933/1024 [MB] (11 MBps) [2024-12-13T23:18:14.409Z] Copying: 943/1024 [MB] (10 MBps) [2024-12-13T23:18:15.357Z] Copying: 958/1024 [MB] (15 MBps) [2024-12-13T23:18:16.295Z] Copying: 976/1024 [MB] (17 MBps) [2024-12-13T23:18:17.240Z] Copying: 1003/1024 [MB] (27 MBps) [2024-12-13T23:18:17.502Z] Copying: 1020/1024 [MB] (16 MBps) [2024-12-13T23:18:17.502Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-13 23:18:17.359057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:38.362 [2024-12-13 23:18:17.359164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:38.362 [2024-12-13 23:18:17.359188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:38.362 [2024-12-13 23:18:17.359201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.362 [2024-12-13 23:18:17.359235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:38.363 [2024-12-13 23:18:17.364489] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:38.363 [2024-12-13 23:18:17.364536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:38.363 [2024-12-13 23:18:17.364550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.230 ms 00:37:38.363 [2024-12-13 23:18:17.364567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.363 [2024-12-13 23:18:17.365390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:38.363 [2024-12-13 23:18:17.365425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:38.363 [2024-12-13 23:18:17.365438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:37:38.363 [2024-12-13 23:18:17.365448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.363 [2024-12-13 23:18:17.365481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:38.363 [2024-12-13 23:18:17.365492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:37:38.363 [2024-12-13 23:18:17.365501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:37:38.363 [2024-12-13 23:18:17.365510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.363 [2024-12-13 23:18:17.365613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:38.363 [2024-12-13 23:18:17.365628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:37:38.363 [2024-12-13 23:18:17.365637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:37:38.363 [2024-12-13 23:18:17.365646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.363 [2024-12-13 23:18:17.365662] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:38.363 [2024-12-13 23:18:17.365676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:37:38.363 [2024-12-13 23:18:17.365687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.365994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366002] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 
23:18:17.366222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:38.363 [2024-12-13 23:18:17.366343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:37:38.364 [2024-12-13 23:18:17.366437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:38.364 [2024-12-13 23:18:17.366566] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:38.364 [2024-12-13 23:18:17.366575] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2 00:37:38.364 [2024-12-13 23:18:17.366584] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:37:38.364 [2024-12-13 23:18:17.366592] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 00:37:38.364 [2024-12-13 23:18:17.366600] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:37:38.364 [2024-12-13 23:18:17.366636] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:37:38.364 [2024-12-13 23:18:17.366648] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:38.364 [2024-12-13 23:18:17.366658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:38.364 [2024-12-13 23:18:17.366666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:38.364 [2024-12-13 23:18:17.366674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:38.364 [2024-12-13 23:18:17.366682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:38.364 [2024-12-13 23:18:17.366690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:38.364 [2024-12-13 23:18:17.366699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:38.364 [2024-12-13 23:18:17.366721] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:37:38.364 [2024-12-13 23:18:17.366730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.364 [2024-12-13 23:18:17.381309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:38.364 [2024-12-13 23:18:17.381359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:38.364 [2024-12-13 23:18:17.381381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.561 ms 00:37:38.364 [2024-12-13 23:18:17.381389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.364 [2024-12-13 23:18:17.381817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:38.364 [2024-12-13 23:18:17.381839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:38.364 [2024-12-13 23:18:17.381850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:37:38.364 [2024-12-13 23:18:17.381858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.364 [2024-12-13 23:18:17.418832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.364 [2024-12-13 23:18:17.418882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:38.364 [2024-12-13 23:18:17.418893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.364 [2024-12-13 23:18:17.418902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.364 [2024-12-13 23:18:17.418978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.364 [2024-12-13 23:18:17.418987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:38.364 [2024-12-13 23:18:17.418996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.364 [2024-12-13 23:18:17.419004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.364 [2024-12-13 23:18:17.419060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.364 [2024-12-13 23:18:17.419075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:38.364 [2024-12-13 23:18:17.419083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.364 [2024-12-13 23:18:17.419092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.364 [2024-12-13 23:18:17.419108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.364 [2024-12-13 23:18:17.419117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:38.364 [2024-12-13 23:18:17.419125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.364 [2024-12-13 23:18:17.419133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.626 [2024-12-13 23:18:17.505201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.626 [2024-12-13 23:18:17.505263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:38.626 [2024-12-13 23:18:17.505277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.626 [2024-12-13 23:18:17.505287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.626 [2024-12-13 23:18:17.575382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.626 [2024-12-13 23:18:17.575458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:37:38.626 [2024-12-13 23:18:17.575472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.626 [2024-12-13 23:18:17.575480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.626 [2024-12-13 23:18:17.575563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.626 [2024-12-13 23:18:17.575574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:38.626 [2024-12-13 23:18:17.575590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.626 [2024-12-13 23:18:17.575599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.626 [2024-12-13 23:18:17.575637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.626 [2024-12-13 23:18:17.575647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:38.626 [2024-12-13 23:18:17.575656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.626 [2024-12-13 23:18:17.575664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.626 [2024-12-13 23:18:17.575744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.626 [2024-12-13 23:18:17.575777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:38.626 [2024-12-13 23:18:17.575787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.626 [2024-12-13 23:18:17.575799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.626 [2024-12-13 23:18:17.575830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.626 [2024-12-13 23:18:17.575840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:38.626 [2024-12-13 23:18:17.575848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.626 [2024-12-13 23:18:17.575856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.626 [2024-12-13 23:18:17.575896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.626 [2024-12-13 23:18:17.575907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:38.626 [2024-12-13 23:18:17.575915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.626 [2024-12-13 23:18:17.575926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.626 [2024-12-13 23:18:17.575971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:38.626 [2024-12-13 23:18:17.575982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:38.626 [2024-12-13 23:18:17.575991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:38.626 [2024-12-13 23:18:17.575999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:38.626 [2024-12-13 23:18:17.576135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 217.062 ms, result 0 00:37:39.199 00:37:39.199 00:37:39.460 23:18:18 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:37:42.007 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # 
restore_kill 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:42.007 Process with pid 86467 is not found 00:37:42.007 Remove shared memory files 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 86467 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 86467 ']' 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 86467 00:37:42.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86467) - No such process 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 86467 is not found' 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_band_md /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_l2p_l1 /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_l2p_l2 /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_l2p_l2_ctx /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_nvc_md /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_p2l_pool /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_sb /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_sb_shm /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_trim_bitmap /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_trim_log /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_trim_md /dev/hugepages/ftl_29a7cbbc-7ef0-48c6-ad1f-d93b6478a8f2_vmap 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:37:42.007 00:37:42.007 real 5m31.594s 00:37:42.007 user 5m19.105s 00:37:42.007 sys 0m12.287s 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:42.007 ************************************ 00:37:42.007 END TEST ftl_restore_fast 00:37:42.007 ************************************ 00:37:42.007 23:18:20 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:37:42.007 23:18:20 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:37:42.007 23:18:20 ftl -- ftl/ftl.sh@14 -- # killprocess 76738 00:37:42.007 23:18:20 ftl -- common/autotest_common.sh@954 -- # '[' -z 76738 ']' 00:37:42.007 Process with pid 76738 is not found 00:37:42.007 23:18:20 ftl -- common/autotest_common.sh@958 -- # kill -0 76738 00:37:42.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76738) - No such process 00:37:42.007 23:18:20 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76738 is not found' 00:37:42.007 23:18:20 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:37:42.007 23:18:20 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=89824 00:37:42.007 23:18:20 ftl -- ftl/ftl.sh@20 -- # waitforlisten 89824 
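The traced cleanup above follows a guard-then-kill pattern: probe the pid with kill -0 and, if the process is already gone, report it instead of failing the test. A minimal sketch of that pattern, assuming an illustrative helper rather than the exact autotest_common.sh implementation:

    # Sketch only: mirrors the kill -0 guard and the "not found" message seen in
    # the trace above; the real helper in autotest_common.sh may differ.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if kill -0 "$pid" 2>/dev/null; then
            # process is still alive, terminate it
            kill "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }

In this run the FTL app (pid 86467) and the earlier spdk_tgt (pid 76738) had already exited, so both calls take the "not found" branch before a fresh spdk_tgt is started as pid 89824.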
00:37:42.007 23:18:20 ftl -- common/autotest_common.sh@835 -- # '[' -z 89824 ']' 00:37:42.007 23:18:20 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:42.007 23:18:20 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:42.007 23:18:20 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:42.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:42.007 23:18:20 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:42.007 23:18:20 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:42.007 23:18:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:42.007 [2024-12-13 23:18:20.972131] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:42.007 [2024-12-13 23:18:20.972943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89824 ] 00:37:42.007 [2024-12-13 23:18:21.142243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.269 [2024-12-13 23:18:21.287339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.233 23:18:22 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.233 23:18:22 ftl -- common/autotest_common.sh@868 -- # return 0 00:37:43.233 23:18:22 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:37:43.233 nvme0n1 00:37:43.233 23:18:22 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:37:43.233 23:18:22 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:43.233 23:18:22 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:43.494 23:18:22 ftl -- ftl/common.sh@28 -- # stores=df3ada73-4ded-4d8d-9b20-4b7327e023b0 00:37:43.494 23:18:22 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:37:43.494 23:18:22 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df3ada73-4ded-4d8d-9b20-4b7327e023b0 00:37:43.756 23:18:22 ftl -- ftl/ftl.sh@23 -- # killprocess 89824 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@954 -- # '[' -z 89824 ']' 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@958 -- # kill -0 89824 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@959 -- # uname 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89824 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:43.756 killing process with pid 89824 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89824' 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@973 -- # kill 89824 00:37:43.756 23:18:22 ftl -- common/autotest_common.sh@978 -- # wait 89824 00:37:45.667 23:18:24 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:45.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:45.667 Waiting for block devices as requested 00:37:45.667 
0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:45.667 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:45.667 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:37:45.927 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:37:51.214 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:37:51.214 Remove shared memory files 00:37:51.214 23:18:29 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:37:51.214 23:18:29 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:51.214 23:18:29 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:37:51.214 23:18:29 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:37:51.214 23:18:29 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:37:51.214 23:18:29 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:51.214 23:18:29 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:37:51.214 ************************************ 00:37:51.214 END TEST ftl 00:37:51.214 ************************************ 00:37:51.214 00:37:51.214 real 20m8.401s 00:37:51.214 user 21m49.675s 00:37:51.214 sys 1m31.269s 00:37:51.214 23:18:29 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.214 23:18:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:51.214 23:18:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:51.214 23:18:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:51.214 23:18:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:51.214 23:18:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:51.214 23:18:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:51.214 23:18:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:51.214 23:18:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:51.214 23:18:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:51.214 23:18:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:51.214 23:18:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:51.214 23:18:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:51.214 23:18:30 -- common/autotest_common.sh@10 -- # set +x 00:37:51.214 23:18:30 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:51.214 23:18:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:51.214 23:18:30 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:51.214 23:18:30 -- common/autotest_common.sh@10 -- # set +x 00:37:52.597 INFO: APP EXITING 00:37:52.597 INFO: killing all VMs 00:37:52.597 INFO: killing vhost app 00:37:52.597 INFO: EXIT DONE 00:37:52.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:53.121 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:37:53.121 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:37:53.121 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:37:53.121 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:37:53.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:53.954 Cleaning 00:37:53.954 Removing: /var/run/dpdk/spdk0/config 00:37:53.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:53.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:53.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:53.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:53.954 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:53.954 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:53.954 Removing: /var/run/dpdk/spdk0 00:37:53.954 Removing: 
/var/run/dpdk/spdk_pid58714 00:37:53.954 Removing: /var/run/dpdk/spdk_pid58916 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59129 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59226 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59261 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59378 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59396 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59590 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59695 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59791 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59902 00:37:53.954 Removing: /var/run/dpdk/spdk_pid59988 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60033 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60064 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60140 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60218 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60650 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60703 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60766 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60782 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60873 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60889 00:37:53.954 Removing: /var/run/dpdk/spdk_pid60991 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61007 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61066 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61084 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61141 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61155 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61315 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61351 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61435 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61607 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61691 00:37:53.954 Removing: /var/run/dpdk/spdk_pid61722 00:37:53.954 Removing: /var/run/dpdk/spdk_pid62148 00:37:53.954 Removing: /var/run/dpdk/spdk_pid62241 00:37:53.954 Removing: /var/run/dpdk/spdk_pid62358 00:37:53.954 Removing: /var/run/dpdk/spdk_pid62405 00:37:53.954 Removing: /var/run/dpdk/spdk_pid62431 00:37:53.954 Removing: /var/run/dpdk/spdk_pid62515 00:37:53.954 Removing: /var/run/dpdk/spdk_pid63138 00:37:53.954 Removing: /var/run/dpdk/spdk_pid63174 00:37:53.954 Removing: /var/run/dpdk/spdk_pid63645 00:37:53.954 Removing: /var/run/dpdk/spdk_pid63743 00:37:53.954 Removing: /var/run/dpdk/spdk_pid63859 00:37:54.216 Removing: /var/run/dpdk/spdk_pid63912 00:37:54.216 Removing: /var/run/dpdk/spdk_pid63932 00:37:54.216 Removing: /var/run/dpdk/spdk_pid63963 00:37:54.216 Removing: /var/run/dpdk/spdk_pid65811 00:37:54.216 Removing: /var/run/dpdk/spdk_pid65948 00:37:54.216 Removing: /var/run/dpdk/spdk_pid65953 00:37:54.216 Removing: /var/run/dpdk/spdk_pid65965 00:37:54.216 Removing: /var/run/dpdk/spdk_pid66010 00:37:54.216 Removing: /var/run/dpdk/spdk_pid66014 00:37:54.216 Removing: /var/run/dpdk/spdk_pid66026 00:37:54.216 Removing: /var/run/dpdk/spdk_pid66071 00:37:54.216 Removing: /var/run/dpdk/spdk_pid66075 00:37:54.216 Removing: /var/run/dpdk/spdk_pid66087 00:37:54.216 Removing: /var/run/dpdk/spdk_pid66132 00:37:54.216 Removing: /var/run/dpdk/spdk_pid66136 00:37:54.216 Removing: /var/run/dpdk/spdk_pid66148 00:37:54.216 Removing: /var/run/dpdk/spdk_pid67529 00:37:54.216 Removing: /var/run/dpdk/spdk_pid67626 00:37:54.216 Removing: /var/run/dpdk/spdk_pid69024 00:37:54.216 Removing: /var/run/dpdk/spdk_pid70769 00:37:54.216 Removing: /var/run/dpdk/spdk_pid70838 00:37:54.216 Removing: /var/run/dpdk/spdk_pid70914 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71024 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71115 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71211 
00:37:54.216 Removing: /var/run/dpdk/spdk_pid71284 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71355 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71459 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71556 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71652 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71717 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71796 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71900 00:37:54.216 Removing: /var/run/dpdk/spdk_pid71992 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72087 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72156 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72231 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72341 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72427 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72523 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72597 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72671 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72745 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72814 00:37:54.216 Removing: /var/run/dpdk/spdk_pid72917 00:37:54.216 Removing: /var/run/dpdk/spdk_pid73008 00:37:54.216 Removing: /var/run/dpdk/spdk_pid73103 00:37:54.216 Removing: /var/run/dpdk/spdk_pid73171 00:37:54.216 Removing: /var/run/dpdk/spdk_pid73244 00:37:54.216 Removing: /var/run/dpdk/spdk_pid73320 00:37:54.216 Removing: /var/run/dpdk/spdk_pid73394 00:37:54.216 Removing: /var/run/dpdk/spdk_pid73498 00:37:54.216 Removing: /var/run/dpdk/spdk_pid73589 00:37:54.216 Removing: /var/run/dpdk/spdk_pid73727 00:37:54.216 Removing: /var/run/dpdk/spdk_pid74011 00:37:54.216 Removing: /var/run/dpdk/spdk_pid74049 00:37:54.216 Removing: /var/run/dpdk/spdk_pid74499 00:37:54.216 Removing: /var/run/dpdk/spdk_pid74696 00:37:54.216 Removing: /var/run/dpdk/spdk_pid74792 00:37:54.216 Removing: /var/run/dpdk/spdk_pid74900 00:37:54.216 Removing: /var/run/dpdk/spdk_pid74955 00:37:54.216 Removing: /var/run/dpdk/spdk_pid74975 00:37:54.216 Removing: /var/run/dpdk/spdk_pid75273 00:37:54.216 Removing: /var/run/dpdk/spdk_pid75329 00:37:54.216 Removing: /var/run/dpdk/spdk_pid75402 00:37:54.216 Removing: /var/run/dpdk/spdk_pid75797 00:37:54.216 Removing: /var/run/dpdk/spdk_pid75937 00:37:54.216 Removing: /var/run/dpdk/spdk_pid76738 00:37:54.216 Removing: /var/run/dpdk/spdk_pid76870 00:37:54.216 Removing: /var/run/dpdk/spdk_pid77034 00:37:54.216 Removing: /var/run/dpdk/spdk_pid77137 00:37:54.216 Removing: /var/run/dpdk/spdk_pid77446 00:37:54.216 Removing: /var/run/dpdk/spdk_pid77705 00:37:54.216 Removing: /var/run/dpdk/spdk_pid78064 00:37:54.216 Removing: /var/run/dpdk/spdk_pid78240 00:37:54.216 Removing: /var/run/dpdk/spdk_pid78453 00:37:54.216 Removing: /var/run/dpdk/spdk_pid78500 00:37:54.216 Removing: /var/run/dpdk/spdk_pid78700 00:37:54.216 Removing: /var/run/dpdk/spdk_pid78719 00:37:54.216 Removing: /var/run/dpdk/spdk_pid78772 00:37:54.216 Removing: /var/run/dpdk/spdk_pid79036 00:37:54.216 Removing: /var/run/dpdk/spdk_pid79274 00:37:54.216 Removing: /var/run/dpdk/spdk_pid79734 00:37:54.216 Removing: /var/run/dpdk/spdk_pid80447 00:37:54.216 Removing: /var/run/dpdk/spdk_pid81302 00:37:54.216 Removing: /var/run/dpdk/spdk_pid82221 00:37:54.216 Removing: /var/run/dpdk/spdk_pid82376 00:37:54.216 Removing: /var/run/dpdk/spdk_pid82453 00:37:54.216 Removing: /var/run/dpdk/spdk_pid82802 00:37:54.216 Removing: /var/run/dpdk/spdk_pid82863 00:37:54.216 Removing: /var/run/dpdk/spdk_pid83821 00:37:54.216 Removing: /var/run/dpdk/spdk_pid84448 00:37:54.216 Removing: /var/run/dpdk/spdk_pid85420 00:37:54.216 Removing: /var/run/dpdk/spdk_pid85545 00:37:54.216 Removing: 
/var/run/dpdk/spdk_pid85589 00:37:54.216 Removing: /var/run/dpdk/spdk_pid85648 00:37:54.216 Removing: /var/run/dpdk/spdk_pid85700 00:37:54.216 Removing: /var/run/dpdk/spdk_pid85753 00:37:54.216 Removing: /var/run/dpdk/spdk_pid85961 00:37:54.216 Removing: /var/run/dpdk/spdk_pid86054 00:37:54.216 Removing: /var/run/dpdk/spdk_pid86123 00:37:54.478 Removing: /var/run/dpdk/spdk_pid86211 00:37:54.478 Removing: /var/run/dpdk/spdk_pid86256 00:37:54.478 Removing: /var/run/dpdk/spdk_pid86317 00:37:54.478 Removing: /var/run/dpdk/spdk_pid86467 00:37:54.478 Removing: /var/run/dpdk/spdk_pid86707 00:37:54.478 Removing: /var/run/dpdk/spdk_pid87520 00:37:54.478 Removing: /var/run/dpdk/spdk_pid88352 00:37:54.478 Removing: /var/run/dpdk/spdk_pid89055 00:37:54.478 Removing: /var/run/dpdk/spdk_pid89824 00:37:54.478 Clean 00:37:54.478 23:18:33 -- common/autotest_common.sh@1453 -- # return 0 00:37:54.478 23:18:33 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:54.478 23:18:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.478 23:18:33 -- common/autotest_common.sh@10 -- # set +x 00:37:54.478 23:18:33 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:54.478 23:18:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.478 23:18:33 -- common/autotest_common.sh@10 -- # set +x 00:37:54.478 23:18:33 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:54.478 23:18:33 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:37:54.478 23:18:33 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:37:54.478 23:18:33 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:54.478 23:18:33 -- spdk/autotest.sh@398 -- # hostname 00:37:54.478 23:18:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:37:54.740 geninfo: WARNING: invalid characters removed from testname! 
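The lcov calls that follow this capture step merge the pre-test baseline with the test capture and then strip paths that should not count toward SPDK coverage (bundled DPDK, system headers, example and tool apps). Condensed, with the long --rc options and absolute paths omitted, the flow is roughly the sketch below; the SPDK_REPO variable and the final genhtml step are illustrative assumptions about how the merged tracefile would typically be rendered, not commands this job runs here.

  # capture coverage from the built tree, tagged with the VM hostname
  lcov -q -c --no-external -d "$SPDK_REPO" -t "$(hostname)" -o cov_test.info
  # merge the pre-test baseline with the test capture
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  # drop paths that should not count toward SPDK coverage
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
  lcov -q -r cov_total.info '/usr/*' -o cov_total.info
  lcov -q -r cov_total.info '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o cov_total.info
  # assumed follow-up: render an HTML report from the merged tracefile
  genhtml cov_total.info -o coverage_html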
00:38:21.393 23:18:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:23.943 23:19:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:26.493 23:19:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:29.799 23:19:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:32.351 23:19:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:34.903 23:19:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:36.818 23:19:15 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:37.079 23:19:15 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:37.079 23:19:15 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:38:37.079 23:19:15 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:37.079 23:19:15 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:37.079 23:19:15 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:37.079 + [[ -n 5023 ]] 00:38:37.079 + sudo kill 5023 00:38:37.090 [Pipeline] } 00:38:37.106 [Pipeline] // timeout 00:38:37.111 [Pipeline] } 00:38:37.125 [Pipeline] // stage 00:38:37.131 [Pipeline] } 00:38:37.145 [Pipeline] // catchError 00:38:37.154 [Pipeline] stage 00:38:37.156 [Pipeline] { (Stop VM) 00:38:37.169 [Pipeline] sh 00:38:37.457 + vagrant halt 00:38:39.999 ==> default: Halting domain... 
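The remaining stages stop the Vagrant-managed test VM, remove its libvirt domain, and move the result directory back into the Jenkins workspace for archiving. Run by hand from the same Vagrant project directory, the sequence reduces to:

  vagrant halt          # graceful shutdown of the test VM (the "Halting domain..." line above)
  vagrant destroy -f    # delete the domain without a confirmation prompt
  # collect the results produced during the run so the archive step can pick them up
  mv output /var/jenkins/workspace/nvme-vg-autotest/output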
00:38:45.302 [Pipeline] sh 00:38:45.639 + vagrant destroy -f 00:38:48.184 ==> default: Removing domain... 00:38:48.459 [Pipeline] sh 00:38:48.744 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:38:48.754 [Pipeline] } 00:38:48.767 [Pipeline] // stage 00:38:48.771 [Pipeline] } 00:38:48.784 [Pipeline] // dir 00:38:48.789 [Pipeline] } 00:38:48.797 [Pipeline] // wrap 00:38:48.803 [Pipeline] } 00:38:48.814 [Pipeline] // catchError 00:38:48.821 [Pipeline] stage 00:38:48.824 [Pipeline] { (Epilogue) 00:38:48.834 [Pipeline] sh 00:38:49.116 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:54.405 [Pipeline] catchError 00:38:54.407 [Pipeline] { 00:38:54.421 [Pipeline] sh 00:38:54.705 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:54.705 Artifacts sizes are good 00:38:54.715 [Pipeline] } 00:38:54.728 [Pipeline] // catchError 00:38:54.739 [Pipeline] archiveArtifacts 00:38:54.746 Archiving artifacts 00:38:54.854 [Pipeline] cleanWs 00:38:54.866 [WS-CLEANUP] Deleting project workspace... 00:38:54.866 [WS-CLEANUP] Deferred wipeout is used... 00:38:54.874 [WS-CLEANUP] done 00:38:54.876 [Pipeline] } 00:38:54.891 [Pipeline] // stage 00:38:54.896 [Pipeline] } 00:38:54.911 [Pipeline] // node 00:38:54.916 [Pipeline] End of Pipeline 00:38:54.976 Finished: SUCCESS
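The timing_finish step earlier rendered the per-step timing data into a flame graph. Reproduced against the archived timing.txt, the call is essentially the one below; the output redirection and SVG file name are assumptions, since the log does not show where flamegraph.pl's stdout is written.

  # step names and per-step seconds come from timing.txt written during the run
  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
      --countname seconds timing.txt > build_timing.svg   # output name assumed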